Zoom offers us the opportunity to connect with each other, but at a price.
Digital platforms are becoming vehicles for new ways of life during quarantine, offering new ways of loving, working, and connecting with people we can no longer see in person.
In particular, the video platform known as Zoom is quickly becoming integral to many of our quarantine routines. But where did Zoom come from, and is it helping us—or hurting us?
Like everything on the Internet, it's most likely doing a little bit of both. And like everything on the Internet, there's a lot going on behind the scenes.
Zoom was founded in 2011 by Eric Yuan; as of 2019, the company was valued at $16 billion. In the era of quarantine, most classes and meetings have shifted onto its servers, along with family dinners, dates, revolution-planning sessions, and intimate events of every kind.
Zoom is an oddly universal experience for many of us in quarantine, a strange third party in between our solitude and the buzzing outside world we used to know. A whole social culture is cropping up around it, filling the void left by concerts and bars—things like "Zoom Parties" and "Zoom Dates" are becoming common social activities, and as with all phenomena of the digital age, it's generated a fair number of memes. ("Zoom Memes for Self Quaranteens" is just one of the Facebook groups dedicated to the strange parallel reality that is a Zoom video call.)
There are fortunate aspects of Zoom. It's allowed businesses to continue operating and has kept families connected across continents; it's broadcast weddings, concerts, and rallies. Video conferencing also vitally allows disabled people to participate in events they couldn't otherwise have attended.
But there's a darker undercurrent to all this Zooming. The problem with the Internet is that nothing is ever really private. In some part of our mind, we must know this. We know that everything we post on the Internet will live somewhere forever, impossible to scrub away. We know that the firewalls intended to block hackers are just as thin as a few lines of code, as easy to break as a windowpane. Yet now, many of us have no choice but to burn our data at the altar of the web, and to face the consequences that may come.
Zoom-Bombers Inundate Synagogues, Classrooms, and Meditations with Racist Harassment
In March, a Massachusetts teacher was in the middle of a Zoom class when she found her lesson interrupted by a string of vile profanities; the interloper also shouted her home address. In another Massachusetts case, a trespasser invaded a Zoom meeting and displayed swastika tattoos on camera.
Cases of so-called "Zoom-bombing" have grown so common—and the attacks are sometimes so heinous and malicious—that the FBI's Boston Division issued a formal warning on March 30, 2020. "As large numbers of people turn to video-teleconferencing (VTC) platforms to stay connected in the wake of the COVID-19 crisis, reports of VTC hijacking (also called 'Zoom-bombing') are emerging nationwide," their website reads. "The FBI has received multiple reports of conferences being disrupted by pornographic and/or hate images and threatening language."
This phenomenon is certainly not confined to Massachusetts. "Zoom-bombing" (invading a Zoom call) may seem like just another form of trolling, but so far, many Zoom-bombers have delivered racist, hateful invective during their intrusions into video calls.
Also near the end of March, an online synagogue service was hijacked by racist accounts that filled the Zoom group chat with "vile abuse" and anti-Semitic sentiments, according to the rabbi.
"It is deeply upsetting that at such a difficult period we are faced with additional challenges like these. We will be keeping the security of our online provision under review through the weeks ahead," stated the rabbi.
According to the Los Angeles Times, some University of Southern California classrooms have been Zoom-bombed with "racist taunts and porn." The stories go on and on. Morgan Elise Johnson's virtual morning meditation sessions were interrupted with graphic sexual material, which devolved into racist taunts when she tried to mute the hackers.
"I just exited out right away," Johnson said. "For it to be at a moment where we were seeking community and seeking collective calm … it really cut through my spirit and affected me in a very visceral way." Johnson's digital media company, the Triibe, has moved its meditations to Instagram live.
You can find plenty of additional examples of Zoom-bombs on TikTok and YouTube, as many gleeful hackers have uploaded footage of themselves interrupting various meetings. And if you're so inclined, you can easily find information about how to hack Zoom calls yourself. On April 2, ZDNet reported that there are now 30 Discord channels related to Zoom hacking, at least three subreddits—two of which have been banned—and many Twitter accounts broadcasting Zoom codes.
If you trace Zoom-bombing to its origins, the text- and voice-chat platform Discord is where it all began. Many of the hackers are less than subtle, often posting open requests for hacks. "Can anybody troll my science class at 9 15," wrote one user in one of Discord's forums, according to pcmag.com. Another user linked to a since-deleted YouTube video that showed someone sharing photos of the Ku Klux Klan to an Alcoholics Anonymous Zoom meeting. The Discord forums are also full of information about hacking Facebook livestreams and Google Hangouts calls.
While many online hackers seek lucrative prizes like credit card numbers, Zoom hackers seem to be motivated purely by a desire for mischief and chaos, as well as racism. Essentially, they're Internet trolls, AKA the scourge of the virtual earth. But the maliciousness of these racist attacks, which are hate speech through and through, shouldn't be underestimated or taken lightly.
How to Protect Your Zoom Account
There are ways to protect yourself against the trolls. pcmag.com recommends creating your own individual meeting ID rather than using the one Zoom assigns to you. It also recommends creating a waiting room so the meeting organizer can choose whom to let in (an option found under "Account Settings") and requiring a password when scheduling new meetings (also found in the account settings, under "Schedule Meeting"). When all participants have arrived, you can "lock" the meeting by clicking "Participants" at the bottom of the window.
In addition, The Verge recommends that users disable the screen-sharing feature. Other sites advise turning "attention tracking" off—which can prevent call organizers from seeing whether you're looking at other tabs during your meeting—and you can use a virtual background to disguise your living space.
The Anti-Defamation League has synthesized all this into a list of handy tips, all of which are explained in detail on its website.
Before the meeting:
- Disable autosaving chats
- Disable file transfer
- Disable screen sharing for non-hosts
- Disable remote control
- Disable annotations
- Use a per-meeting ID, not your personal ID
- Disable "Join Before Host"
- Enable "Waiting Room"
During the meeting:
- Assign at least two co-hosts
- Mute all participants
- Lock the meeting once all attendees are present
If you are Zoom-bombed:
- Remove problematic users and disable their ability to rejoin
- Lock the meeting to prevent additional Zoom-bombing
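For anyone scheduling meetings through code rather than the app, the pre-meeting items in the checklist above map onto fields in a Zoom meeting-creation request. The following is a minimal sketch of building such a request body, assuming the field names Zoom's REST API uses for these settings (`waiting_room`, `join_before_host`, `mute_upon_entry`); check the current API reference before relying on them.

```python
# Sketch: a "create meeting" request body with the protective
# settings from the checklist enabled. Field names are assumptions
# based on Zoom's REST API, not verified against current docs.
import json
import secrets

def hardened_meeting_payload(topic: str) -> dict:
    """Build a meeting-creation body with anti-Zoom-bombing defaults."""
    return {
        "topic": topic,
        "password": secrets.token_urlsafe(8),  # require a password to join
        "settings": {
            "join_before_host": False,  # disable "Join Before Host"
            "waiting_room": True,       # enable "Waiting Room"
            "mute_upon_entry": True,    # mute all participants on entry
        },
    }

payload = hardened_meeting_payload("Morning meditation")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the meeting-creation endpoint with your account's credentials; the point of the sketch is simply that every checklist item is a setting you can pin down once, rather than remember to click each time.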
Zoom itself has faced scrutiny and questions for quite a while, and its problems extend beyond a vulnerability to hacks. "Things you just would like to have in a chat and video application — strong encryption, strong privacy controls, strong security — just seem to be completely missing," said Patrick Wardle, a security researcher who previously worked at the National Security Agency, per NPR. Wardle discovered a flaw in Zoom's code that could allow hackers to watch video participants through a webcam, a problem Zoom says it has fixed.
And of course, like most big tech companies, Zoom is in cahoots with the evil mastermind behind tech's greatest data suck—Facebook.
Zoom Has Been Quietly Sharing Data with Facebook
Many websites use Facebook's software to enhance and facilitate their own programs. For its part, Zoom connects to Facebook's Graph API the moment the app opens, immediately placing data in the hands of Facebook's developers, according to Motherboard. The app lets Facebook know the time, date, and location of every Zoom call, along with details about the user's device. The problem is that Zoom never asked its users for permission to hand their data to Mark Zuckerberg.
Following the Motherboard report, a California Zoom user sued the company for failing to "properly safeguard the personal information of the increasing millions of users," arguing that Zoom is violating California's Unfair Competition Law, Consumers Legal Remedies Act and the Consumer Privacy Act. "The unique advertising identifier allows companies to target the user with advertisements," reads the lawsuit. "This information is sent to Facebook by Zoom regardless of whether the user has an account with Facebook."
In response, Zoom CEO Eric Yuan published a blog post in which he stated, "Our customers' privacy is incredibly important to us, and therefore we decided to remove the Facebook SDK in our iOS client and have reconfigured the feature so that users will still be able to log in with Facebook via their browser." Still, users are questioning whether one CEO's word is enough.
There are other eerie realities about Zoom. Call organizers, for example, have more power than we might think: using certain settings (all of which can be turned off), hosts can read their participants' messages and see whether attendees clicked away from the call.
Certainly in the future, more dark truths will emerge about Zoom. For now, many tech companies are advising users not to say or do anything on Zoom that they wouldn't want to be broadcast to the public. Of course, Zoom is probably no more or less safe than the rest of the Internet, a thought that could be comforting or nightmarish depending on what you've been up to online.
A Much Larger Problem: The Internet Is Not a Safe Space
The prospect of a Zoom hack or data breach is scary, and it's absolutely not what any of us want to worry about during these unstable quarantined times. Yet it also reveals a dark truth about the Internet: nothing is safe online—and pretty much everything we post is being used to sell us things. Unless you're a super-famous person or the unlucky target of a bored troll, you should be less afraid that someone will go through your Google search history and more afraid of what happens to your home address and email every time you type them into a website's pop-up questionnaire.
Zoom's encryption may be shoddy, but it's certainly not the only site that's selling your data to Google, Facebook, and their cohort of advertising companies. In fact, nearly every ad you see is targeted using your web search history, demographics, location, and other factors individual to you.
Data—which includes, but is not limited to, the numbers, emails, and information you enter into all those online forms—is one of the most valuable commodities of the Internet age. Every time we do anything online, or carry our phones anywhere, someone is collecting data; typically, that data is filtered into massive artificial intelligence systems that synthesize it and use it to sell products.
"We are already becoming tiny chips inside a giant system that nobody really understands," writes historian Yuval Noah Harari. "Every day I absorb countless data bits through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles."
And what will happen to all this data? It's hard to say—but conspiracies are thriving. "Dataists believe in the invisible hand of the dataflow. As the global data-processing system becomes all-knowing and all-powerful, so connecting to the system becomes the source of all meaning," adds Harari. "Dataists further believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives."
That might be extreme, but perhaps not. During this coronavirus quarantine, we will all be pouring much more information onto the Internet than ever before, providing the Internet with intimate knowledge of our every move—more than enough to create pretty accurate digital representations of our own minds. Whoever finds out the best way to harness and sell this data (be it an AI program, a human, or Elon Musk and Grimes' cyborgian child) may just be our new god.
The GPT-2 software can generate fake news articles on its own. Its creators believe its existence may pose an existential threat to humanity. But it could also present a chance to intervene.
Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.
The software, called GPT-2, can generate cohesive, coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.
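GPT-2 itself is a neural network with hundreds of millions of parameters, but its core mechanic, predicting the next word from the words that came before, can be illustrated with a toy sketch. The following Markov-chain generator is entirely this article's illustration (not OpenAI's code, and nowhere near GPT-2's sophistication), but the generate-one-word-at-a-time loop is the same basic idea:

```python
# Toy next-word model: count which word follows which in a corpus,
# then generate text by repeatedly sampling an observed successor.
# GPT-2 replaces the lookup table with a neural network over
# sub-word tokens, but generation is the same word-by-word loop.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed directly after it."""
    words = corpus.split()
    successors = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        successors[current].append(nxt)
    return successors

def generate(successors: dict, seed: str, length: int = 10) -> str:
    """Start from a seed word and keep sampling a likely next word."""
    out = [seed]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break  # no observed successor; stop generating
        out.append(random.choice(options))
    return " ".join(out)

model = train("the unicorn spoke perfect english and the unicorn ran away")
print(generate(model, "the"))
```

With a large enough corpus and a model that looks at whole contexts rather than single words, the output stops reading like word salad and starts reading like the unicorn "news" below, which is exactly what makes the technology worrying in the wrong hands.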
Fears like this led the Elon Musk-founded company OpenAI to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," they announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
In addition to writing a cohesive fictional story based on Lord of the Rings, the software wrote a logical scientific report about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."
This journalistic aptitude sparked widespread fears that AI technologies as sophisticated as the GPT-2 could influence upcoming elections, potentially generating unfathomable amounts of partisan content in a single instant. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."
Elon Musk quit OpenAI in 2018, but his legacy of fear and paranoia regarding AI and its potential evils lives on. The specter of his caution was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," echoed Jack Clark, policy director of OpenAI, when asked about his decision to keep the new software under lock and key.
In a world already plagued by fake news, cat-fishing, and other forms of illusion made possible by new technology, AI seems like a natural next step in the dizzying sequence of illusion and corruption that has rapidly turned the online world from a repository of cat videos (the good old days) to today's vortex of ceaselessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance against AI's unstoppable growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."
But until AI achieves the singularity—a level of consciousness where it achieves and supersedes human intelligence—it is still subject to the whims of whoever is controlling it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.
When it comes down to it, for now, AI is a weapon.
When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid creatures in whatever ways they wish to. Instances of robot beheadings and other violent behaviors towards AI hint towards a darker trend that could emerge should AI become a free-for-all, a humanoid object that can be treated in any way on the basis of its presumed inhumanity.
Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we all become more dependent on technology, there is less of a clear line dividing the human from the robotic. As a manmade invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, questioning the dearth of ethics in Silicon Valley and in the tech sphere on the whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown that this is untrue, including instances where law enforcement software systems displayed racist bias against black people (based on data collected by humans).
AI can be just as prejudiced and close-minded as a human, if not more so, especially in its early stages where it is not sophisticated enough to think critically. An AI may not feel in and of itself, but—much like we learn how to process the world from our parents—it can learn how to process and understand emotions from the people who create it, and from the media it absorbs.
After all, who could forget Microsoft's Tay, the Twitter bot that began spewing racist, anti-Semitic rants mere hours after its launch—rants that it, of course, learned from human Twitter users? Studies have estimated that 9 to 15 percent of all Twitter accounts are bots—but each of those bots had to be created and programmed by a human being. Even if a bot wasn't created for a specific purpose, it still learns from the human presences around it.
A completely objective, totally nonhuman AI is kind of like the temperature absolute zero; it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that can inspire it to. It can also learn morality if its teachers choose to imbue it with the ability to tell right from wrong.
Their quandary may not be so different from the struggle parents face when deciding whether to allow their children to watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine the extent to which they can trust the public with their work. They also have to determine which aspects of humanity they want to expose their inventions to.
OpenAI may have kept their kid safe inside the house a little longer by freezing the GPT-2, but that kid is growing—and when it goes out into the world, it could change everything. For better or worse, at some point, super-intelligent AI is going to wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape who it's going to be when it arrives.
Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.