Last month, Jordan Belamire unwittingly, and unwillingly, found herself the first public victim of a new kind of abuse. While visiting her brother-in-law, she tried out the HTC Vive, a new head-mounted virtual reality system, and played a multiplayer archery game called QuiVr.
"I was hanging out next to BigBro442, waiting for our next attack," Belamire, which is a pseudonym, wrote in a now widely shared blog post. "Suddenly, BigBro442’s disembodied helmet faced me dead-on. His floating hand approached my body, and he started to virtually rub my chest."
Belamire's experience raised a dreadful prospect: That the connected spaces in the booming field of virtual reality will suffer the same plague of anonymous harassment and abuse that has come to define the social internet in 2016.
Or worse. The story suggested that anonymous abuse, in a medium defined by the suspension of disbelief, would take on new and frightening contours.
Wrote Belamire,
"It felt real, violating. This sounds ludicrous to anyone who hasn’t stood on that virtual reality ledge and looked down, but if you have, you might start to understand. The public virtual chasing and groping happened a full week ago and I’m still thinking about it."
For virtual- and augmented-reality evangelists, who have long touted the potential therapeutic benefits of immersive media, the incident was a bracing reminder that new technology is never immune to old problems. And behind that fact looms a serious question: Who will be held accountable for traumatic experiences caused by abuse in virtual reality?
The short answer: Probably not the corporations that make the hardware or the software.
"VR providers will likely face no liability whatsoever, period," said Michael Risch, a law professor at Villanova University who has published widely on legal issues surrounding the technology.
That's because VR providers will be largely shielded by Section 230 of the Communications Decency Act, which says that providers of interactive computer services are not responsible for content that comes from users or other providers. It's the same reason Twitter isn't liable for the tsunami of hate speech and harassment on its platform.
"So long as the providers themselves are not doing the harassing, they don’t have to do a thing," Risch told BuzzFeed News.
But, Risch said, they could be liable if they created the content themselves: say, a virtual bot programmed to harass, or, in a slightly more far-flung hypothetical, a VR game that involved a rape or an assault. In that case, according to Risch, a traumatized person could file an emotional distress suit against the company that wrote the code.
Still, such a plaintiff would have to convince a judge that a traumatic VR experience isn't protected speech in the same way as a novel, a movie, or a traditional video game, which would depend on making a difficult argument about an ineffable technological difference.
"To the extent that there will be law imposed in virtual worlds, it will have to be imposed and enforced by the providers," Risch said. In other words, bad and harmful behavior in VR will be governed in the same way it is on a monitor: Through codes of conduct and terms of service, and whatever other rights corporations reserve to maintain decency and order within their products.
But that's one of the problems with the burgeoning VR industry: There are so many small developers and startups in various stages of funding and organization that the consistent enforcement of behavior standards seems impossible. QuiVr, for example, was in a pre-release alpha and was developed by two people.
"There was no mechanism in place to safeguard against the deplorables," said Miles Perkins, the vice president for marketing communications at Jaunt, a VR startup that has raised more than $100 million from Disney and other investors.
Few of the major corporations in VR are eager to lead the way and talk about the steps they'll take to combat abuse. Oculus — the Facebook-owned VR leader whose founder, Palmer Luckey, secretly funded an alt-right, pro-Trump nonprofit — did not respond to requests for comment. Microsoft, Magic Leap, and Wevr, all major players in VR and AR, also declined to talk to BuzzFeed News for this story.
HTC, which co-developed the Vive headset with the gaming services giant Valve, said in a statement, "Unfortunately, this behavior exists in the real world as well as various social platforms. We support content developers to create proper tools to prevent this type of behavior, and ensure people have a safe and trustable experience in VR."
Perkins, the Jaunt vice president, pointed to the way Riot Games has curbed abuse in its popular League of Legends video game. That success depended on a heavily invested community with easy access to reporting tools and quick responses from Riot, as well as punishments (account suspensions and bans) that actually bothered abusers. Whether most VR experiences will be able to match those conditions is unclear.
"I think the responsibility lies in providing those mechanisms," Perkins said. "If something bad happens, people should be locked out or held accountable for what they’re doing."
For the time being, that may be the most VR providers can or will do, though it seems unlikely such steps would have prevented Jordan Belamire from being virtually groped. BuzzFeed News was unable to reach Belamire for comment; she recently deleted her Twitter account. Some have speculated she did so because of harassment.