
CHAPTER 9

Civil Rights

INFORMATION IN THIS CHAPTER:

  •  Civil rights for the disabled

  •  How AR can help the disabled

  •  AR as a civil right

INTRODUCTION

With all the wonder and anticipation that surrounds augmented reality, it is easy to forget that there are already millions of people whose experience of reality through their five senses has been involuntarily altered. More than 50 million Americans - about 18% of our population - have some form of disability. Whether their condition impairs one or more of their five senses, their freedom of movement, or their cognitive abilities, these individuals do not enjoy the same capacity to experience reality that others have.

When discussing this fact in the context of AR, it is tempting to make the observation - as I have mistakenly done in the past - that disabled persons already experience an “augmented reality.” That is, until one remembers that to “augment” means to “make greater or larger.” Physical and mental disabilities do anything but. They “substantially limit[] one or more of the major life activities of such individual,”1 thereby diminishing that individual’s opportunity to experience physical reality in ways that others take for granted.

Yet AR does have an important role to play for these individuals. Custom-designed augmented world devices could go a long way toward bridging the experiential gap imposed by disabilities. Although the end result may be only an approximation of typical human experience, the inherent value of the augmentation to that individual certainly could be significantly more meaningful than an equivalent improvement in a normally abled person’s experience.

The United States and many other nations have various laws on the books designed to encourage providers of goods and services to make extra effort to accommodate the disabled, in order to minimize the degree to which disabilities keep people from enjoying everyday life experiences. When certain methods of accommodation become sufficiently economical and logistically feasible, they tend to become requirements instead of suggestions. As AR technologies improve, it seems inevitable that some of them will first be encouraged, and ultimately become prescribed, methods of accommodating disabled persons.

THE CURRENT REQUIREMENTS FOR ACCOMMODATING THE DISABLED IN DIGITAL MEDIA

THE GOVERNING LEGAL FRAMEWORK

The Americans with Disabilities Act of 1990 (ADA) is the flagship of legal protection for the disabled in the United States. It was adopted to ensure, among other things, that no one is “discriminated against on the basis of disability in the full and equal enjoyment of the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.”2 The law has required public and private entities across the country to make a number of significant accommodations in the way they do business, and modifications to their physical structures, to assist disabled individuals. Various other Federal3 and state laws supplement these protections.

By and large, these laws have received broad, bipartisan support, and have even been strengthened over the years. But striking the right balance between accommodating the disabled and respecting the liberty and economic interests of businesses is not always simple, especially in light of how quickly technological and economic realities change.

When regulations requiring accommodation are perceived as imposing too heavy a burden on businesses, a backlash can erupt. For example, in 2010, the Department of Justice published updated regulations under the ADA. These regulations adopted the 2010 Standards for Accessible Design, which, for the first time, contain specific accessibility requirements for many types of recreational facilities, including swimming pools, wading pools, and spas. In January 2012, the Department issued guidance titled “ADA 2010 Revised Requirements: Accessible Pools - Accessible Means of Entry and Exit” to assist entities covered by Title III of the ADA, such as hotels and motels, health clubs, recreation centers, public country clubs, and other businesses that have swimming pools, wading pools, and spas, in understanding how the new requirements apply to them. Many owners of such businesses, however, did not care at all for what they learned, sparking a firestorm of criticism. The DOJ relaxed its enforcement of the new regulations a bit, by delaying the deadline for implementation and emphasizing that “there is no need [under the ADA] to provide access to existing pools if doing so is not ‘readily achievable,’”4 especially in a weak economy.5

Pressure to provide more accommodation is building on the international level as well. The UN Convention on the Rights of Persons with Disabilities - the first human rights treaty of the twenty-first century - was opened for signature in 2007. The United States is one of the 149 member states to sign the treaty, although Congress has not ratified it as of this writing.

The same debates will continue in the augmented world. As noted above, the points of conflict in today’s world are over such issues as building ramps into swimming pools and making sure sidewalks have adequate curb cuts - because these are “readily achievable” means of providing equal access to “places of public accommodation.” Tomorrow, as virtual “places” become more important venues for commerce and entertainment, the fight will likely be over equal access to those experiences. It will be interesting to observe whether the law continues to view such immersive content as purely software and speech,5 or according to the metaphors of physicality and place that we use to describe it. If the latter, then we may see laws treat massively multi-participant virtual experiences as “places of public accommodation,” at least for purposes of civil rights laws. The question will then be which forms of ensuring equal access to those “places” are “readily achievable” for their creators to provide.

DIGITAL ACCOMMODATION IS STILL IN ITS EARLY STAGES

Equal access standards are only beginning to make an impact in digital technology. Section 255 and Section 251(a)(2) of the Communications Act of 1934, as amended by the Telecommunications Act of 1996, require manufacturers of telecommunications equipment and providers of telecommunications services to ensure that such equipment and services are accessible to and usable by persons with disabilities, if readily achievable. The UN Convention likewise “recognizes access to information and communications technologies, including the Web, as a basic human right,”6,7 but that standard will not apply in the United States unless Congress ratifies and implements the treaty.

The Federal government actually holds itself to a higher standard than others in this area. Section 508 of the Rehabilitation Act of 1973,8 first added in 1986, has been updated several times since. This legislation requires all Federal departments and agencies to “ensure ... that the electronic and information technology [they develop, procure, maintain or use] allows [disabled persons] to have access to and use of information and data that is comparable to the access to and use of the information and data”9 by individuals without disabilities. This provision applies “regardless of the type of medium of the technology,”10 but comes with a variety of caveats, including an exception for when “an undue burden would be imposed on the department or agency.”12

FIGURE 9.1

Chief Operations Officer Torsten Oberst demonstrates FEDVC’s software, which reads web page content aloud, at a 2011 program highlighting Section 508.11

Nevertheless, these standards put the Federal government ahead of most private providers in terms of access to digital materials. As applied to online resources, such as websites, Section 508 and its enabling regulations are modeled after the access guidelines developed by the Web Accessibility Initiative of the World Wide Web Consortium, or W3C (Fig. 9.1).13 The W3C advertises that its “guidelines [are] widely regarded as the international standard for Web accessibility.”14 Indeed, some of the W3C’s basic tips for making websites more accessible - such as alternative text for images, allowing for input by keyboard instead of a mouse, and transcripts for podcasts15 - are becoming increasingly common. That said, such standards still remain largely voluntary in most circumstances.
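To make one of those guidelines concrete, here is a minimal sketch of the kind of automated audit a site owner might run for the most basic of them - images without alternative text. It assumes the BeautifulSoup library is available and is illustrative only, not a substitute for the full WCAG checklist.

```python
# Minimal accessibility audit: flag <img> tags that lack meaningful alt text.
from bs4 import BeautifulSoup

def images_missing_alt(html: str) -> list[str]:
    """Return the src of every image with no non-empty alt attribute."""
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "<no src>")
            for img in soup.find_all("img")
            if not img.get("alt", "").strip()]

print(images_missing_alt('<img src="logo.png"><img src="chart.png" alt="Q3 sales">'))
# -> ['logo.png']
```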

Applying such laws to the digital economy, however, is tricky. For one thing, so much digital content is inherently audiovisual in nature that it may be either practically impossible, or at least very expensive, to create satisfactory accompaniments for the blind or deaf - especially in light of the sheer volume of digital data available. What is more, there is not the same tradition of access to such materials as there is for more basic functions such as climbing stairs and crossing streets. On the other hand, our society becomes more dependent on digital and online data with each passing day, meaning that those without meaningful access to that world are getting increasingly left behind.

Therefore, advocates on both sides of the issue tend to be particularly vocal when disputes arise. For example, in March 2012, a federal judge in California allowed the Greater LA Council on Deafness to proceed with a lawsuit against CNN for failing to provide closed captioning for videos on its website. Similar accommodations for the deaf have become customary in many television broadcasts, which has created an expectation for similar options with online video.

To date, closed captioning has been required online only for video that was originally broadcast on television. In 2012, the Federal Communications Commission required closed captioning only in full-length TV shows that were rebroadcast online. By 2016, this requirement will be extended to “so-called ‘straight lift’ clips using the same audio and video .... A year later, the rule will apply to montages involving multiple straight-lift clips. And by mid-2017, closed captioning will be required on live and near-live TV over the web, including news and sports.”16 Some providers already go beyond these minimum expectations, however - such as the text transcripts that YouTube auto-generates for many of its videos - and there is every reason to believe that the requirement will eventually be imposed on more, if not all, video content provided online, and perhaps other content as well.

Indeed, in June 2012, a judge in Massachusetts became the first to rule in favor of those seeking to require Netflix to closed-caption its online video.17 The foundation of this ruling was the judge’s finding that Netflix and other websites had become “place[s] of public accommodation” - the first time any court had reached that conclusion. The implications of this holding are not modest. Leading internet law commentator Eric Goldman, of the Santa Clara University School of Law, saw this as a dangerous and errant deviation from previously settled law that threatened to do real damage to internet commerce:

This is a bad ruling. Really terrible. It’s ... potentially ripped open a huge hole in Internet law. ... If websites must comply with the ADA, all hell will break loose. Could YouTube be obligated to close-caption videos on the site? (This case seems to leave that door open.) Could every website using Flash have to redesign their sites for browsers that read the screen? I’m not creative enough to think of all the implications, but I can assure you that ADA plaintiffs’ lawyers will have a long checklist of items worth suing over. Big companies may be able to afford the compliance and litigation costs, but the entry costs for new market participants could easily reach prohibitive levels.18

Nevertheless, that case settled a few months later, with Netflix agreeing to caption 100% of its videos by 2014, and to reduce the time the service takes to add captions to new streaming content down to 7 days by 2016.19 The National Association for the Deaf, which brought the lawsuit, heralded this agreement as “a model for the streaming entertainment industry,” although it’s still not clear two years later whether the Netflix model will indeed become the norm anytime soon. As of this writing, courts remain “split about the extent to which private websites are subject to the accessibility requirements of Title III of the Americans with Disabilities Act (ADA), and the U.S. Department of Justice (DOJ) has not yet published any clear regulations about the issue.”20

Meanwhile, Federal regulators continue to push for greater accommodation not only by websites, but in mobile applications as well. In June 2014, the Department of Justice reached its first settlement agreement that included a provision requiring the settling party to make its mobile app ADA-compliant. “The [DOJ’s] investigation found that the [Florida State University] Police Department’s online application form asked questions about a past or present disability and other medical conditions in violation of the ADA.”21 One of the steps FSU agreed to take to rectify the problem was “ensuring that the FSU Police Department website, including its employment opportunities website and its mobile applications, conform to the Web Content Accessibility Guidelines 2.0 Level AA Success Criteria and other Conformance Requirements (WCAG 2.0 AA).”22

These developments demonstrate the growing political and legal pressure to make digital media more accessible to disabled persons. Especially in light of the increasing median age in the United States and other developed nations,23 there is every reason to expect this trend to continue. This means that those developing the augmented world should proactively include access concerns in their design strategies from the very beginning. It also suggests there will be lucrative markets available for digital solutions that enhance access to digital content.

HOW AR CAN MEANINGFULLY IMPROVE THE LIVES OF DISABLED PERSONS

Fortunately, the augmented medium provides several natural methods of enhancing disabled persons’ access to digital information. Again, Google is largely responsible for sparking most of the public conversation on this topic because its Glass device was the first digital eyewear to get widespread attention. (Not to mention the “smart” contact lenses that Google announced in July 2014, which are said to be capable of “monitor[ing] the wearer’s blood sugar levels.”)24 Some have said of Glass that “not since the invention of text-to-voice and other speech-recognition software has a tech invention had such potential to help the disabled.”25 The issues and opportunities Glass raises, however, apply to the entire category of wearable computing.

THE DEAF

Enormous sums of money and political capital have been spent to achieve the modest improvements in closed captioning availability that resulted from the Netflix and CNN cases and related FCC regulations. Yet AR-infused eyewear could accomplish far more in terms of giving the deaf access to the everyday world.

Google Glass has already offered a glimpse of what this future might look like. Because the device (by default) conveys sound through bone conduction - i.e., through the skull directly into the inner ear - rather than through headphones, it actually allows even many deaf persons to perceive sound. Digital marketing professional David Trahan, who is deaf in his right ear, experienced this first-hand. The audio produced by his Glass device allowed him to hear through his right ear for the first time, and now it has become an integral part of his life.26

Combining digital eyewear with speech recognition software has the potential to radically enhance life for deaf individuals by essentially closed captioning anything and everything in life. People wearing such digital eyewear could potentially see the words of someone speaking to them superimposed on their field of vision in more-or-less real time. Obviously, technological barriers to such devices still remain. Software would need to improve, and it would need to sync with directional microphones that could isolate the speaker’s voice from the background noise. But the impressive quality of voice recognition products like Dragon NaturallySpeaking and Siri brings hope that such a product is not far off.
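To make the pipeline concrete, here is a minimal sketch of the captioning loop such eyewear would need to run. It uses the open-source SpeechRecognition Python package and Google’s free web recognizer as stand-ins for whatever engine a real device would embed, and simply prints the captions that a headset would instead render in the wearer’s field of view.

```python
# A minimal live-captioning sketch (not any vendor's actual software).
import speech_recognition as sr

recognizer = sr.Recognizer()

def live_captions() -> None:
    """Listen on the default microphone and print each recognized phrase,
    standing in for text rendered in the wearer's field of view."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # crude substitute for a directional mic
        while True:
            audio = recognizer.listen(source, phrase_time_limit=5)
            try:
                print(recognizer.recognize_google(audio))
            except sr.UnknownValueError:
                pass  # nothing intelligible in this chunk; keep listening

if __name__ == "__main__":
    live_captions()
```

Even this toy version highlights the hard parts noted above: isolating one voice from background noise and keeping latency low enough to feel like real time.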

Voices are not the only sounds that deaf people could benefit from “hearing.” Wearable devices could be programmed to recognize and alert to the telltale sounds of oncoming traffic, traffic control signals, music, alarms - all the sounds that others take for granted every day - and display appropriate text notices in the user’s field of view.

These solutions would allow a deaf person to understand the sounds around them, but could AR help the deaf communicate? For more than a decade, researchers have been working toward that exact goal. As early as the 2003 IEEE International Symposium on Wearable Computers, a team demonstrated the ASL OneWay, a tool designed to help the deaf community communicate with the hearing by translating American Sign Language.27 The device consists of a set of sensors in a hat worn by the signer, and two wristwatch-sized devices, one on each hand. The system recognizes the hand gestures that make up a sign and deduces the English phrases most closely associated with the signed phrase. The deaf person sees these phrases in his eyewear and selects the appropriate one, which the device then speaks through a speaker in the hat. As one would imagine, these prototypes were a bit cumbersome, but the concept was potentially revolutionary.

More recently, researchers at the multi-university project MobileASL have been working to develop visual recognition software capable of detecting hand gestures and transmitting them in real time over standard mobile phone networks (Fig. 9.2).28 Other projects hope to someday be able to translate the signs into written or spoken speech in near-real time. These projects have advanced far in the past decade, and it is feasible to imagine them installed in digital eyewear within the coming decade. Likewise, designers have at least begun to conceive of gloves that can track the wearer’s gestures in three dimensions, also providing instant translation from sign to speech. Such developments could increase deaf persons’ ability to integrate into society by orders of magnitude.

FIGURE 9.2

The MobileASL project.

Once such on-the-fly captioning becomes even marginally feasible, we are likely to see political pressure grow to make the technology available to the deaf community. The first implementations will almost certainly be voluntary, by providers who seek to distinguish themselves from their competitors. By that time, much of the programming that we currently receive on televisions may be broadcast on eyewear instead, meaning that the same closed captioning rules that currently apply to TV will be in force there as well. What is more, the deaf will not be the only market for the technology. Just as noisy bars activate the closed captioning feature of their televisions to allow patrons to follow the programming, normally abled people could encounter situations in which they too can benefit from technology originally intended for the disabled.

Once the technology gains a track record, insurance companies may begin to subsidize it for persons who lose their hearing as a result of injury or disease. Government officials and politicians who today ensure that a sign language interpreter is present with them onstage may instead make live-captioning eyewear available to those in the crowd who need it. Eventually, provisions like the Rehabilitation Act may require Federal employees to provide such “access” to their live speeches. By various means, live closed captioning in the physical world will eventually become commonplace.

THE BLIND

Games like Inception the App,29 which “uses augmented sound to induce dreams,” already promise to digitally augment our sense of hearing. AR devices could accentuate the hearing of blind individuals in a way analogous to the visual information they could provide for the deaf. Users could receive audible alerts when they come into proximity with a person, vehicle, traffic control device, sign, or any of a hundred other significant objects. In 2012, Japanese telecommunications giant Nippon Telegraph and Telephone Corp. developed a prototype pair of glasses designed to do just that. Running the company’s “SightFinder” technology, the device “sends streaming images from a camera to one of NTT’s data centers to recognize and identify street signs or potential obstacles. In real time, NTT’s computers analyze the images and provide warnings - street construction causing a detour or a cone in front of a pothole - via an Internet-connected device like a smartphone to help the visually impaired to move freely.”30
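As a rough illustration of that client-server division of labor, the sketch below captures frames from a webcam and posts each one to a recognition service, then surfaces whatever warning text comes back. The endpoint URL and response format are hypothetical placeholders, not NTT’s actual SightFinder interface, and OpenCV plus the requests library stand in for the device’s camera and networking stack.

```python
# Hypothetical SightFinder-style client: frame capture -> remote recognition -> warning.
import time

import cv2
import requests

RECOGNIZER_URL = "https://example.com/recognize"  # placeholder, not a real NTT endpoint

def stream_warnings() -> None:
    camera = cv2.VideoCapture(0)  # default webcam standing in for a head-mounted camera
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            _, jpeg = cv2.imencode(".jpg", frame)
            resp = requests.post(
                RECOGNIZER_URL,
                data=jpeg.tobytes(),
                headers={"Content-Type": "image/jpeg"},
                timeout=5,
            )
            warning = resp.json().get("warning")  # assumed response shape: {"warning": "..."}
            if warning:
                print("WARNING:", warning)  # a real device would speak this aloud
            time.sleep(0.5)  # throttle uploads
    finally:
        camera.release()

if __name__ == "__main__":
    stream_warnings()
```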

Dr. Peter Meijer, a senior scientist at Philips Research Laboratories in the Netherlands, has been working toward this goal for years. His software, called the “vOICe,” is “a universal translator for mapping images to sounds.”31 Already available as a free Android app, the software uses a mobile phone’s camera to take a snapshot of the user’s surroundings and render it as sound, associating height with pitch and brightness with loudness. Presumably once the user grows accustomed to this system, it will become second nature and allow the blind to roam more confidently than is possible with a mere walking stick for guidance. As of this writing, the app has earned an average of 3.5 out of 5 stars from more than 77,000 reviews in the Google Play store - a respectable indication of real utility.
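The core mapping is simple enough to sketch in a few lines. The toy version below is my own approximation of the published height-to-pitch, brightness-to-loudness scheme, not Dr. Meijer’s actual code: it scans a grayscale image left to right, assigns each pixel row a fixed sine-wave frequency, scales each tone by that pixel’s brightness, and writes the result to a WAV file.

```python
# Toy vOICe-style soundscape generator: height -> pitch, brightness -> loudness.
import wave

import numpy as np
from PIL import Image

SAMPLE_RATE = 22050
COLUMN_SECONDS = 0.02          # playback time per image column
F_LOW, F_HIGH = 200.0, 4000.0  # frequency range spanned by the image height

def image_to_sound(image_path: str, out_path: str = "soundscape.wav") -> None:
    img = np.asarray(Image.open(image_path).convert("L").resize((64, 64)), dtype=float) / 255.0
    freqs = np.geomspace(F_HIGH, F_LOW, img.shape[0])  # top rows get the highest pitch
    t = np.arange(int(SAMPLE_RATE * COLUMN_SECONDS)) / SAMPLE_RATE
    tones = np.sin(2 * np.pi * np.outer(freqs, t))     # one sine wave per pixel row
    columns = [(img[:, c, None] * tones).sum(axis=0) for c in range(img.shape[1])]
    signal = np.concatenate(columns)
    signal /= np.abs(signal).max() + 1e-9               # normalize to [-1, 1]
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes((signal * 32767).astype(np.int16).tobytes())

if __name__ == "__main__":
    image_to_sound("street_scene.jpg")  # any image convertible to grayscale will do
```

Even this crude version conveys why a training period is needed before such soundscapes become second nature.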

One could imagine similar functionality being added to Word Lens or almost any other visual recognition app, allowing the app to audibly explain to the user what it sees. The blind community has certainly imagined this future. One sight-impaired Explorer shared his thoughts after testing Glass:

I imagine a future where Glass can read a menu to me in a restaurant. A simple glance at the menu and glass recognises the text and begins to read aloud. Or perhaps, opening a book and have it read aloud, reading a book - that is something I have not been able to do in a long time. Object recognition, the ability to identify objects in a specific scene, or recognise my friends and acquaintances, and speak their names in my ear. Essentially, Glass would allow me to more readily operate in social environments, fill in the gaps created by my lack of vision.32

The impact of such advances would be so profound for blind individuals that they are likely to become common and even required by the same mechanisms discussed above with respect to accommodations for the deaf.

As one step in that direction, the vOICe for Android “has already been demonstrated to run on Google Glass, letting the blind ‘see’ for themselves and get visual feedback in a second. A talking face detector and color identifier is included.”33 A significant caveat to this idea is the limited battery life of Glass in its current form. Users are cautioned to use an external battery, and even then “[i]t is recommended to run the vOICe for Android on Google Glass only for up to a few minutes at a time, to avoid overheating risks.”34 These present-day limitations, however, have not tempered the excitement Glass has stirred within the blind community, with some already calling it “a blind man’s window into the world.”35

Dr. Meijer has proposed an even more radical version of this idea: integrating the vOICe app directly into a dedicated eyewear device that promises “synthetic sight” by essentially hacking the brain to accept audio signals as visual images. According to Meijer’s website, “neuroscience research has already shown that the visual cortex of even adult blind people can become responsive to sound, and sound-induced illusory flashes can be evoked in most sighted people. The vOICe technology may now build on this with live video from an unobtrusive head-mounted camera encoded in sound.”

Digital eyewear also offers a promising new platform for apps like VizWiz, which might be described as crowd-sourced AR. Currently a smartphone app, VizWiz allows blind people to upload pictures of their surroundings and ask questions about them, then get feedback from sighted persons around the globe. “Where smartphone-based VizWiz users have to contend with the inherent hassle of ‘using a handheld device while blind,’ Glass offers the chance to provide continuous, hands-free visual assistance,”36 according to the service’s founder.

Of course, audio signals are not the only way to enhance life for the blind. Those who read Braille could still benefit from enhanced haptic technology. In theory, the feel of virtually any surface could be augmented with additional sensory feedback, including in the Braille language. Therefore, a blind person wearing a haptic glove could “feel” Braille text on any surface, without that writing physically being there.
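A sketch of the text-to-Braille half of that idea appears below. It maps characters to the standard six-dot cell numbering (dots 1-3 down the left column, 4-6 down the right) and hands the patterns to a stand-in “driver” that, on real hardware, would pulse the glove’s actuators. Only a handful of letters are included for brevity, and the driver function is hypothetical.

```python
# Sketch of text -> Braille cells for a hypothetical haptic glove driver.
# Dots are numbered 1-3 down the left column and 4-6 down the right.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "l": {1, 2, 3}, "r": {1, 2, 3, 5},   # abbreviated; a full table covers Grade 1 Braille
}

def to_cells(text: str) -> list[set[int]]:
    """Convert text to a list of dot patterns; unknown characters become blank cells."""
    return [BRAILLE_DOTS.get(ch, set()) for ch in text.lower()]

def drive_glove(cells: list[set[int]]) -> None:
    """Stand-in for the haptic driver: show which of the six pins would be raised."""
    for cell in cells:
        print("".join("#" if dot in cell else "." for dot in range(1, 7)))

drive_glove(to_cells("ball"))   # b = dots 1,2; a = dot 1; l = dots 1,2,3
```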

THE PHYSICALLY HANDICAPPED

Digital information alone can’t do anything to increase the mobility of those with physical impairments. Better databases and wayfinding applications, however, could make it a lot easier to find the accommodations designed to make their lives easier. For example, Mapability,37 an existing data layer on the Layar browser, helps the disabled locate the nearest wheelchair-accessible venue.

The introduction of Glass has also done much to illustrate how digital eyewear can improve the lives of the physically disabled. Just the simple ability to take pictures and video has been a sea change in the lives of disabled users. One Glass Explorer wrote, “[m]y injury was a spinal cord injury that occurred in 1988 and yesterday I was able to take a picture unassisted for the first time in 24 years!”38 Similar stories abound.

The disabled have even had a hand in helping to overcome the limitations of first-generation devices like Glass. Because Glass is still (as of this writing) a beta product, its Explorers have an active voice in shaping future enhancements and revisions. Disabled users have made such suggestions as making the volume control less buried in the command menu,39 decreasing the sensitivity of the touchpad,40 and allowing alternate methods of controlling the video camera (because those without use of their hands cannot tap the touchpad, as is presently required).41 Input like this allows the device (and those that come after it) to be designed with accessibility in mind.

One specific community that has benefited from Glass has been those with Parkinson’s disease, which causes uncontrollable tremors. “With custom apps, experts have tuned Glass to provide subtle alerts reminding volunteers to take their medication and notify them of upcoming medical appointments. Sufferers are also prompted to speak or swallow to prevent drooling. Glass’ motion sensors are put to good use too, preventing patients from ‘freezing’ by displaying visual cues to help them unblock their brain and regain a flow of a movement.”42 One can easily imagine those afflicted by any number of diseases with analogous physical symptoms benefiting from the same functionality.

For example, patients with ALS (Lou Gehrig’s disease) and muscular dystrophy - both of which lead to loss of motor control throughout the body - have experienced increased quality of life through digital eyewear. One volunteer who works with ALS patients said, “Some patients have no use of their hands, and others are losing their vocal abilities. But they talk to Glass and it understands them.”43 In July 2014, the Portuguese company LusoVU successfully completed a Kickstarter campaign for its own “augmented reality glasses” called EyeSpeak (Fig. 9.3).44 This device - which was inspired by the CEO’s father being diagnosed with ALS - is designed to capture in a wearable device the same eye-tracking technology currently used by desktop computers to turn patients’ eye movements into written letters and words.

FIGURE 9.3

EyeSpeak by LusoVU.

Because the impairments suffered by these communities are more often the result of injury, disease, or advanced age, they are more likely to receive AR devices through their health insurance provider or medical professional.

THOSE WITH COGNITIVE IMPAIRMENTS, LEARNING DISABILITIES, AND EMOTIONAL TRAUMA

One study in Ohio created simulated virtual environments to aid the rehabilitation of those with traumatic brain injuries and other cognitive impairments.45 Similarly, Dr. Helen Papagiannis - a designer, researcher, and artist specializing in AR - has written an AR pop-up book, “Who’s Afraid of Bugs?,” designed to let those suffering from phobias directly encounter their fears in augmented space.46 The book features various insects that appear to come alive through a companion AR app. It “was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. AR provides a safe, controlled environment to conduct exposure therapy within a patient’s physical surroundings,” Papagiannis writes, “creating a more believable scenario with heightened ‘presence’ and greater immediacy than Virtual Reality (VR).”47

FIGURE 9.4

Sension app for autistic users.

Wearable devices even hold promise for those with severe neuro-psychological impairments. The startup company Sension, for example, develops software that recognizes people’s emotional state by analyzing their facial expression (Fig. 9.4). The company’s Glass software maps 78 points on the face and labels the faces with onscreen keywords like “happy” and “angry.”

“Emotional recognition (software) is still in its early days, at about the state of a 3-year-old, but I still felt passionate about trying to do something meaningful,” says Sension founder Caitalin Voss. “From my personal experience, I know that the issues (for my cousin) are recognizing an expression, and then smiling back. Glass is good for the first, and can help with the second.”48 Here, too, health insurance companies and medical professionals will be the most likely source of AR-based treatments.

The possibilities for AR in the educational field are seemingly endless. Brad Waid and Drew Minock have been at the forefront of this topic for some time. They were each elementary school educators in Bloomfield Hills, Michigan, when they began touring the country teaching other educators how to use AR in the classroom. Now employed by Daqri to do the same work on a broader scale, they have seen countless educators break down barriers to learning and open up exciting new pedagogical possibilities with AR applications. For example, AR allows kinesthetic learners their best opportunity yet to interact with digital objects in a way that fits their learning style.

Although these techniques offer new worlds of possibilities for all kids, the potential is particularly tantalizing for kids with learning disabilities and other barriers to comprehension. Educators are currently limited in what they can offer by such pesky constraints as budgets, resources, and the laws of physics. AR overcomes those barriers by virtually replicating and allowing students to meaningfully interact with anything they can imagine. Kids who need to learn through particular senses can have their instruction tailored to those needs.

Driven by such legal mandates as the Individuals with Disabilities Education Act (IDEA), which was reauthorized in 2004, the public education system is constantly searching for alternative methods to teach kids who do not respond to traditional pedagogical techniques. For example, the IDEA requires that a meeting of parents, educators and other professionals be convened for each student with special needs, resulting in an Individualized Education Plan (IEP) designed to accommodate the child’s specific disabilities.

IDEA 2004 already requires IEP teams to consider the use of “assistive technology” so as “to maximize accessibility for children with disabilities.”49 An “assistive technology device” is defined as “any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of a child with a disability.”50 IDEA defines an “assistive technology service” as:

“any service that directly assists a child with a disability in the selection, acquisition, or use of an assistive technology device. Such term includes

  • (A) the evaluation...

  • (B) purchasing, leasing, or otherwise providing for the acquisition of assistive technology devices...

  • (C) selecting, designing, fitting, customizing, adapting, applying, maintaining, repairing, or replacing...

  • (D) coordinating and using other therapies, interventions, or services with assistive technology devices...

  • (E) training or technical assistance for such child, or ...the family of such child...

  • (F) training or technical assistance for professionals...”51

The Act also requires schools to provide training in the assistive technology for the teachers, child, and family.52

These statutory provisions already provide the legal foundation for requiring AR-based tools as part of a disabled child’s IEP. Once educators have a sufficient track record with AR pedagogical tools to prove their effectiveness - which, thanks to passionate educators like Waid, Minock, and many others, will not be long - we could very soon see conversations about augmented reality happening in IEP meetings across the country.

The incredible promise of augmented world devices and experiences to improve the lives of the disabled suggests that legal incentives and sanctions will soon encourage or require its use in various contexts. The concepts discussed in this chapter highlight some of the rationales by which that may be accomplished.

CHAPTER 10

Litigation Procedure

INFORMATION IN THIS CHAPTER:

  •  Evidence and V-Discovery

  •  In the courtroom

  •  Exercising personal jurisdiction

INTRODUCTION

On the whole, the legal profession is a conservative institution. It does not move quickly to adopt new technologies or change the way it does things. To the contrary, it serves to bring some semblance of balance and consistency to a rapidly changing society by applying the lessons of the past to solve today’s disputes. Change does come even within the legal system, however, and like everything else in contemporary life, change seems to be coming at a faster rate than it did before.

So it will be with augmented reality. As of this writing, only a handful of disputes involving AR have made it through the legal system. It will be some time before use of the technology becomes anywhere near commonplace within the system itself. Nevertheless, some legal innovators have already begun to see the value AR can add in the way they do their jobs and represent their clients’ interests. As the ability to tell stories in AR improves, it should become a more frequently used tool for legal persuasion. And before any of that change sets in, lawyers will likely be scrambling to understand and adapt to the AR data that their clients create in the course of their more-rapidly adapting businesses.

GATHERING EVIDENCE FOR USE IN LEGAL PROCEEDINGS

One of the main attractions of digital eyewear is its ability to capture users’ experiences from their own first-hand perspective, in a hands-free manner that prevents the device from interfering with what it’s recording. The advertisements and popular apps in this space emphasize the use of such capabilities for recording fun, recreational activity, like playing with kids, shopping, and even skydiving. As with anything else in AR, though, these devices simply provide a platform. It’s up to the user to determine the content.


One group of people that cares quite a bit about reproducing scenes from everyday life as accurately as possible is the legal profession. Law enforcement officers, detectives, inspectors, and lawyers all seek to gather and preserve evidence of what other people are doing in order to be able to accurately retell that story in the neutral context of a courtroom. Just as wearable technology holds the promise of being able to capture moments more accurately and uniquely than other methods, so too does it stand to enhance the ability to introduce those experiences into evidence.

MOBILE VIDEO AS AN INTENTIONAL MEANS OF GATHERING EVIDENCE

United States courts already have a predisposition in favor of video evidence. In the landmark case Scott v. Harris,53 decided in 2007, the United States Supreme Court announced what almost amounts to a per se rule of deference. The plaintiff in that case alleged that the defendant police officer had used excessive force by ramming the plaintiff’s car during a high-speed chase, causing a crash that badly injured the plaintiff. The lower courts had allowed the lawsuit to proceed, determining that a reasonable jury could rule for either party based on its interpretation of the evidence.

The Supreme Court in Scott reversed, holding as a matter of law that the evidence could only support a judgment in favor of the officer. Its primary basis for reaching this conclusion was the “existence in the record of a [dashcam] videotape ... [that] quite clearly contradicts the version of the story told by [plaintiff.]”54 Specifically, the video demonstrated that plaintiff’s driving had “resemble[d] a Hollywood-style car chase of the most frightening sort, placing police officers and innocent bystanders alike at great risk of serious injury,”55 thereby justifying the degree of force used by the defendant officer. The guiding principle for future cases set forth by Scott is one that commands deference to unrebutted video evidence:

When opposing parties tell two different stories, one of which is blatantly contradicted by the record, so that no reasonable jury could believe it, a court should not adopt that version of the facts for purposes of ruling on a motion for summary judgment.

That was the case here with regard to the factual issue whether respondent was driving in such fashion as to endanger human life. [Plaintiff’s] version of events is so utterly discredited by the record that no reasonable jury could have believed him. The Court of Appeals should not have relied on such visible fiction; it should have viewed the facts in the light depicted by the videotape.56

Of course, video recordings can be altered. “There [were] no allegations or indications that [the Scott] videotape was doctored or altered in any way, nor any contention that what it depicts differs from what actually happened.”57 Video editing technology has come a long way even in the years since Scott, such that litigants in later cases may have to do a little more work to prove the authenticity of their records. But Scott’s rule of deference still governs.

Meanwhile, one of the predominant features of augmented world technology is the proliferation of devices that can record audiovisual footage. It is little wonder, therefore, that people have already begun using these devices for the purpose of gathering evidence to use in court. In April 2014, New York City - which had already experimented with giving its police officers digital eyewear - next decided to give the devices to its restaurant inspectors.58 Around the same time, the Phoenix, Arizona personal injury law firm Fennemore Craig launched a program it called “Glass Action” (groan), through which the firm lent its digital eyewear devices to its clients. “The idea is to let the clients communicate with their lawyers via Glass to show how their injuries impact their daily lives. Ultimately, Fennemore Craig hopes to turn these communications into evidence.”59 The program is a testament to the fact that the intimacy generated by first-person-perspective video can also subtly influence a jury to empathize with the person behind the recording, “see[ing] the nuances of a victim’s daily challenges firsthand.”60 It is as close as video evidence can come to putting the viewer in the victim’s shoes.

PRESERVING THREE-DIMENSIONAL EXPERIENCES IN AR

Writing in 2003, police futurists Thomas J. Cowper and Michael E. Buerger foresaw “[a]utomatic sensor readings that calculate distance and height and directly create digital and AR maps for court presentation.”61 By 2011, private accident reconstruction firms in the United States were already beginning to employ 3-D laser scanners for just this purpose, although only on a small scale.62 “Mounted on a tripod, a laser scans the horizon and records up to 30 million separate data points, down to submillimeter resolution. Each sweep takes four minutes, and investigators will typically make four sweeps.... The image can then be processed into a 3-D computer model, allowing investigators to see where the vehicles are located relative to each other, tire skid marks, and other evidence.”63

The following year, as discussed in Chapter 9, Danish researchers presented a multi-sensory system designed to allow investigators to capture a full crime scene in AR:

“The goggles consist of two head mounted 3D-cameras feeding video to a backpack with laptop. With this tech, you’d be free to move and look around while you manipulate the electronic display with a pair of gloves. The left hand brings up a set of menus and tools, while the right hand acts as a pointer. By pointing to a blood splatter or bullet holes (for example), you’d be able to tag them as points of interest in a 3D-model of the crime scene. The system is also set up to completely document the crime scene with a video and audio track. This sort of virtual record would allow a new investigator to explore the crime scene and it may also be accepted as evidence in future court cases.”64

Such technology, however, would be just as useful for civil litigation as for criminal prosecutions, as discussed further below.

GATHERING EVIDENCE FROM DIGITAL REMNANTS

Of course, a fundamental characteristic of digital data is its permanence. Once created - and especially once it is uploaded to a server - digital information is notoriously difficult to ever truly, permanently delete. Therefore, it will not always be necessary or even preferable to use wearable devices to capture events as they happen. Rather, most AR evidence used in legal proceedings will likely be found after the fact, often because it was shared socially by the very person against whom it is to be used.

The bounty of evidence being collected in social media today bears this out. Three particular examples serve as interesting transitional species, so to speak, in the evolution from social media to the augmented world. First, as discussed in Chapter 7, California bicyclist Chris Bucchere struck and killed an elderly pedestrian while Bucchere was competing for the fastest recorded time on the competitive bicycling social network Strava. At a hearing in his prosecution for manslaughter, “data from Bucchere’s Strava account ... had been used to show how fast he had been going and to prove he had ignored stop signs.”65 Likewise, Bucchere’s comments made through the social network after the crash - in which he lamented the “heroic” loss of his helmet - helped establish his reckless disregard for the consequences of his actions.

FIGURE 10.1

Alleged excerpt from Cecilia Abadie’s Google+ account appearing to show a picture taken through Glass while driving.

Another case discussed in Chapter 7 was the first-ever traffic citation for wearing Google Glass while driving, issued to software developer and avid Glass Explorer Cecilia Abadie. She was found not guilty because the officer could not prove that the device was actually turned on while she was behind the wheel. This prompted some internet sleuths to investigate her Google+ and YouTube social media accounts, where they found photos and recordings that appeared to have been taken while driving (Fig. 10.1). Apparently, they also found a message Abadie had posted saying “I just received a message ... while driving.” Of course, none of this “evidence” would have been likely to make a difference in the actual hearing on Abadie’s citation, nor is it conclusive proof that she was the one who made the recordings, or that she did so while driving (which, it should be observed, is not necessarily unlawful or even dangerous; see the discussion in Chapter 7). But it does illustrate the fact that there are evidentiary goldmines online, and that wearable devices will create even more opportunities for lawyers to discover such gems - if they have the stamina and wherewithal to sift through all the available data.

A third, almost-real example came in August 2014, when online news outlets reported that Gainesville, Florida police had used a murder suspect’s interactions with his iPhone to prove he committed the crime, including the “fact” that he had asked Siri where to hide the victim’s body.66 “In addition to the Siri query, [the suspect’s] phone had no activity between 11:31pm and 12:01am on the night [the victim] disappeared. [The suspect] also used the flashlight app on his phone for a total of 48 minutes that day ....”67 Although later reports retracted much of this narrative, the fact of its plausibility demonstrates just how many digital remnants we leave behind already using today’s technology. As the world becomes more augmented, even more of our everyday actions will be preserved, allowing others to come back after the fact and reconstruct - or misinterpret - our actions in litigation.

V-DISCOVERY

THE PRECEDENT OF e-DISCOVERY

As these examples demonstrate, advances in digital and computing technologies can make litigation, like anything else, more effective and efficient. Lawyers have so many more tools at their disposal for crafting and communicating persuasive arguments than they had 10, or even five, years ago.

But this rapid expansion of technology has also been giving lawyers a whole lot more to do. Generally speaking, any documents, files, emails, spreadsheets, or information that is reasonably likely to reveal evidence that could be admissible in court is fair game for discovery during litigation.68 Increasingly, the digital data stored and exchanged by the people and companies involved in lawsuits are becoming important to the issues being fought over. Especially over the past decade, that has meant that lawyers and their staff often have to gather “electronically stored information” (ESI) during the discovery phase, in addition to the paper documents and testimony - a phenomenon we call “e-discovery.” Therefore, lawyers end up with far more data to sift through in order to figure out what happened than they used to.

“Perhaps no case could be a more monumental example of the reality of modern e-discovery,” says a 2011 article in the ABA Journal, “than the [then-]ongoing Viacom copyright infringement lawsuit against YouTube filed back in 2008. In that dispute, the judge ordered that 12 terabytes of data be turned over”69 - more than the printed equivalent of the entire Library of Congress. Even after a few years, this example still remains a prodigious monument to the burdens of e-discovery. “Experiences like these,” the article continues, “have left law firms and in-house attorneys scrambling to make sense of the new risks associated with the seemingly endless data produced by emerging technologies like cloud computing and social media.”

How will law firms and litigants cope, then, when augmented reality becomes mainstream, and digital technology leaps off the computer monitor to overlay the physical world? At least four potential problems seem apparent.

ORDERS OF MAGNITUDE MORE DATA

The first problem will be one of volume. Consider this: in 2013, it was calculated that “[a] full 90 percent of all the data in the world has been generated over the last two years.”70 And that was before the wave of wearable communications and health-monitoring devices that began shortly thereafter.

Companies such as Vuzix and Google already have digital eyewear on the market, and several more are in development. If a site like YouTube can amass enough video footage to make the prospect of reviewing it all seem (quite rightly) ridiculous, how about when everyone is wearing digital eyewear that is capable of recording more or less everything we look at? Will paralegals be sifting through days’ and weeks’ worth of mundane, first-person audio and video to find the relevant portions of a litigant’s experiences? As more of our reading takes place on digital devices, we are already creating troves of data about our activities in browser caches and RAM. But how much larger will our digital footprints be when everyday physical objects become opportunities (even necessities) for encountering and creating geotagged data?
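Some rough arithmetic puts the scale in perspective. The figures below are assumptions chosen only for illustration (a modest 5 Mbps stream, 12 recorded hours a day), not measurements from any actual device, but even they show a single wearer generating in one year nearly as much raw footage as the 12 terabytes at issue in the Viacom case.

```python
# Back-of-the-envelope estimate (assumed figures, not device measurements).
BITRATE_MBPS = 5        # a modest 720p video stream
HOURS_PER_DAY = 12      # waking hours recorded

bytes_per_day = BITRATE_MBPS * 1e6 / 8 * 3600 * HOURS_PER_DAY
print(f"{bytes_per_day / 1e9:.0f} GB per wearer per day")          # ~27 GB
print(f"{bytes_per_day * 365 / 1e12:.1f} TB per wearer per year")  # ~9.9 TB
```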

TRACKING IT ALL DOWN

The second, and closely related, problem will be locating and collecting all of this data. As recently illustrated by the comedic film Sex Tape, it is hard enough nowadays to locate data stored in “the cloud,” which actually means some remote server farm nestled somewhere in the distant hills. Presumably, that data will be stored in even more diffuse ways in an AR world, in which our digital experience is likely to be generated by a mesh of interconnected devices. Whether or not my eyewear will require a centrally broadcast “signal” or “network” in order to function, it will certainly be interacting with any number of signals sent to and from objects that I physically encounter, leaving digital traces of my physical presence behind.

We are already halfway there. Consider Color, the social media startup that gathered a lot of attention for a brief period in 2011. Its premise was to give users access to other people’s photo streams merely by coming into physical proximity to those people. Foursquare, Waze, and similar sites likewise track users’ locations in real time and offer them discounts at businesses near their current, physical location. Once transactions like this become the centerpiece of a lawsuit, will it require lawyers to pinpoint where particular people were when they accessed these apps?

If it becomes relevant in litigation to retrace someone’s steps through an augmented reality, how would one do it? That will depend on how and where the data is stored. If, as is the case today, almost all data resides either on central servers or on the mobile device itself, there will be obvious points for collecting the data. But as the data disperses, it may become necessary to actually visit the locations where the person being investigated traveled, in order to retrieve the bits of digital data they left behind in nearby connected devices. Or perhaps we will all be equipped with personal “black boxes” that keep track of our digital experiences - all too often, probably, for the purpose of uploading them to lifelogs, or whatever social media has by then become.

MAKING SENSE OF FIRST-PERSON AR DATA

A third problem will be one of triangulation. Today, ESI may take various forms, but it all has one thing in common: it’s almost always viewable on a two-dimensional screen. That will not be universally true for much longer. How users perceive augmented reality will depend first on how they are looking at their physical surroundings. It may not be possible to interpret digital data stored in a server somewhere without knowing exactly where the individual(s) viewing it were located, the direction they were facing, what other data they had open, and so on.

As an example, take the situation discussed in Chapter 5: a trademark infringement lawsuit in which the plaintiff alleges that a virtual version of his trademark was geo-tagged onto the brick-and-mortar location of his competitor’s store, leading confused customers to patronize his competitor instead of his own business. (This is a fairly straightforward extrapolation of all the lawsuits being filed nowadays over sponsored ads in search engine results.) That plaintiff’s claim will rise or fall in part based on how that geotag actually looked to customers. That, in turn, may depend on where the potential customers were when they looked at the logo. Was it visible through the trees, or in the sun? On which AR platforms was it viewable (assuming that there will be multiple service providers)? Did different brands of eyewear render it in the same way? Was it a static display, or did it sense and orient itself toward each individual viewer?

Even more complex issues come into play with other forms of AR, such as haptic feedback. A server or device memory may record that a glove or other haptic device delivered a series of electrical impulses, but determining with any reliability exactly how that felt to the user may not be possible without recreating the experience.

PRESERVATION

Fourth, after courts and the Federal Rules of Civil Procedure began to acknowledge the significance of electronically stored information, it also became clear how easily and frequently individuals and companies were deleting potentially significant evidence - whether intentionally or merely out of ignorance. Out of this realization came the recognition of a duty that every person has to preserve evidence relevant to a potential legal claim - including ESI - whenever that person “reasonably anticipates” litigation over the claim. When someone ought to have that anticipation is necessarily fact-dependent; it could be by receiving a formal complaint or warning that a lawsuit is coming, or when a disagreement becomes sufficiently contentious that a reasonable person would see litigation as a distinct possibility. If the duty is triggered in a corporate setting, the company’s lawyers or other representative will often issue an internal “litigation hold” warning, putting all employees on notice not to delete digital information that could relate to the issues in the potential lawsuit.

Just when corporate officers were beginning to wrap their brains around the idea of preserving vast amounts of emails, spreadsheets, and word processing documents, the duty of preservation expanded to such platforms as social media accounts, voicemails, and text messages. The introduction of wearable devices, AR interfaces, and v-discovery will expand the burdens of preservation by yet another order of magnitude. There will come a time in the near future when companies will need to catalog, or at least be able to query their employees about, the types of wearable devices they use and the data those devices accumulate. When one employee sues over stressful or discriminatory workplace conditions, for example, it may become necessary to collect the health-monitoring data of each employee in the office to establish the average level of stress and the factors that tended to increase it. If some individuals delete such information about themselves during the relevant timeframe, however, the company could find itself sanctioned for destroying evidence.

These are just a few of the potential issues; rest assured, there will be others. But it all comes with a silver lining. Just a few minutes contemplating the complexities of virtual (or “v-”) discovery makes the current fuss over e-discovery seem not so bad after all.

ASSISTING LAWYERS WITH LEGAL RESEARCH

At a 2014 legal technology conference at Harvard Law School, Wayne Weibel presented his own customized “citation extraction” software for Google Glass.71 Although public details on the project are scarce, it appears to recognize legal text that the person wearing Glass is looking at and find the case law citations in the document. From there, the software could presumably look up and display the case being cited. More advanced versions might even detect the name of a case when spoken in the courtroom, and provide the wearer with instant intelligence on the cited opinion.
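The project’s internals are not public, but the general idea is easy to approximate: run OCR on whatever the camera sees, then pattern-match the text for reporter citations. The sketch below is only that approximation (the regular expression and sample strings are illustrative, not Weibel’s actual code); a full tool would feed each match into a legal research service to pull up the opinion.

```python
# Toy citation extractor: spot reporter citations (e.g. "550 U.S. 372") in OCR'd text.
import re

CITATION_RE = re.compile(
    r"\b\d{1,3}\s+"                                                # volume number
    r"(?:U\.S\.|S\.\s?Ct\.|F\.(?:2d|3d)?|F\.\s?Supp\.(?:\s?2d)?)"  # common reporters
    r"\s+\d{1,4}\b"                                                # first page
)

def extract_citations(ocr_text: str) -> list[str]:
    return CITATION_RE.findall(ocr_text)

sample = "See Scott v. Harris, 550 U.S. 372 (2007); cf. 127 S. Ct. 1769."
print(extract_citations(sample))   # ['550 U.S. 372', '127 S. Ct. 1769']
```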

Of course, finding a legal opinion only tells half the story. Lawyers regul
