With Donald Trump preparing to crack down on social media, Mark Zuckerberg is echoing Trump's sentiments
Last week two of Donald Trump's tweets attacking mail-in voting were flagged by Twitter as inaccurate, with a link to clarifying information.
Predictably, President Trump did not take the note well and is now preparing to sign an executive order with the purpose of cracking down on social media companies. In a move that strikes at the very foundation of the Internet, the new order will seek to give the federal government authority over how these platforms moderate user content.
Zoom offers us the opportunity to connect with each other, but at a price.
Digital platforms are becoming vehicles for new ways of life during quarantine, offering new methods of loving and working and connecting with people who we can no longer see in person.
In particular, the video platform known as Zoom is quickly becoming integral to many of our quarantine routines. But where did Zoom come from, and is it helping us—or hurting us?
Like everything on the Internet, it's most likely doing a little bit of both. And like everything on the Internet, there's a lot going on behind the scenes.
Zoom was founded in 2011 by Eric Yuan. As of 2019, the company is valued at $16 billion. In the era of quarantine, most classes and meetings have shifted over to its servers. Family dinners, dates, revolution-planning sessions, intimate events: all have moved to the online world.
Zoom is an oddly universal experience for many of us in quarantine, a strange third party in between our solitude and the buzzing outside world we used to know. A whole social culture is cropping up around it, filling the void left by concerts and bars: "Zoom Parties" and "Zoom Dates" are becoming common social activities, and as with all phenomena of the digital age, it has generated a fair number of memes. ("Zoom Memes for Self Quaranteens" is just one of the Facebook groups dedicated to the strange parallel reality that is a Zoom video call.)
There are fortunate aspects of Zoom. It's allowed businesses to continue and has kept families connected across continents; it's broadcasted weddings and concerts and rallies. Video conferencing services also vitally allow differently-abled people to participate in events they couldn't have attended otherwise.
But there's a darker undercurrent to all this Zooming. The problem with the Internet is that nothing is ever really private. In some part of our mind, we must know this. We know that everything we post on the Internet will live somewhere forever, impossible to scrub away. We know that the firewalls intended to block hackers are just as thin as a few lines of code, as easy to break as a windowpane. Yet now, many of us have no choice but to burn our data at the altar of the web, and to face the consequences that may come.
Zoom-Bombers Inundate Synagogues, Classrooms, and Meditations with Racist Torments
In March, a Massachusetts teacher was in the middle of a Zoom class when suddenly she found her lesson interrupted by a string of vile profanities. The interloper also shouted her home address. In another Massachusetts case, a trespasser invaded a Zoom meeting and displayed swastika tattoos on camera.
Cases of so-called "Zoom-bombing" have grown so common—and the attacks are sometimes so heinous and malicious—that the FBI's Boston Division issued a formal warning on March 30, 2020. "As large numbers of people turn to video-teleconferencing (VTC) platforms to stay connected in the wake of the COVID-19 crisis, reports of VTC hijacking (also called 'Zoom-bombing') are emerging nationwide," their website reads. "The FBI has received multiple reports of conferences being disrupted by pornographic and/or hate images and threatening language."
This phenomenon is certainly not confined to Massachusetts. "Zoom-bombing" (or invading a Zoom call) may seem like just another form of trolling, but so far, many Zoom-bombers have delivered racist, hateful invectives during their intrusions onto video calls.
Near the end of March, an online synagogue service was likewise hijacked by racist accounts, which filled the Zoom group chat with "vile abuse" and anti-Semitic sentiments, according to the rabbi.
"It is deeply upsetting that at such a difficult period we are faced with additional challenges like these. We will be keeping the security of our online provision under review through the weeks ahead," stated the rabbi.
According to the Los Angeles Times, some University of Southern California classrooms have been Zoom-bombed with "racist taunts and porn." The stories go on and on. Morgan Elise Johnson's virtual morning meditation sessions were interrupted with graphic sexual material, which devolved into racist taunts when she tried to mute the hackers.
"I just exited out right away," Johnson said. "For it to be at a moment where we were seeking community and seeking collective calm … it really cut through my spirit and affected me in a very visceral way." Johnson's digital media company, the Triibe, has moved its meditations to Instagram live.
You can find plenty of additional examples of Zoom-bombs on TikTok and YouTube, as many gleeful hackers have uploaded footage of themselves interrupting various meetings. And if you're so inclined, you can easily find information about how to hack Zoom calls yourself. On April 2, ZDNet reported that there are now 30 Discord channels related to Zoom hacking, at least three subreddits—two of which have been banned—and many Twitter accounts broadcasting Zoom codes.
Trace Zoom-bombing to its origins and you'll end up at Discord, the online text and voice chat platform where it all began. Many of the hackers are less than subtle, often posting open requests for hacks. "Can anybody troll my science class at 9 15," wrote one user in one of Discord's forums, according to pcmag.com. Another user linked to a since-deleted YouTube video that showed someone sharing photos of the Ku Klux Klan to an Alcoholics Anonymous Zoom meeting. The Discord forums are also full of information about hacking Facebook livestreams and Google Hangouts calls.
While many online hackers seek lucrative targets like credit card numbers, Zoom hackers seem to be motivated purely by a desire for mischief and chaos, as well as racism. Essentially, they're Internet trolls, AKA the scourge of the virtual earth. But the maliciousness of these racist attacks, which are hate speech through and through, shouldn't be underestimated or taken lightly.
How to Protect Your Zoom Account
There are ways to protect yourself against the trolls. pcmag.com recommends creating your own individual meeting ID rather than using the one Zoom assigns to you. It also recommends creating a waiting room so the meeting organizer can choose whom to let in (an option found under "account settings") and requiring a password when scheduling new meetings, which can also be set in the account settings under "Schedule Meeting." Once all participants have arrived, you can "lock" the meeting by clicking "Participants" at the bottom of the window.
In addition, The Verge recommends that users disable the screen-sharing feature. Other sites advise turning "attention tracking" off—which can prevent call organizers from seeing whether you're looking at other tabs during your meeting—and you can use a virtual background to disguise your living space.
The Anti-Defamation League has synthesized all this into a list of handy tips, all of which are explained in detail on its website.
Before the meeting:
- — Disable autosaving chats
- — Disable file transfer
- — Disable screen sharing for non-hosts
- — Disable remote control
- — Disable annotations
- — Use per-meeting ID, not personal ID
- — Disable "Join Before Host"
- — Enable "Waiting Room"
During the meeting:
- — Assign at least two co-hosts
- — Mute all participants
- — Lock the meeting, if all attendees are present
If you are Zoom bombed:
- — Remove problematic users and disable their ability to rejoin
- — Lock the meeting to prevent additional Zoom bombing
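For hosts who schedule meetings programmatically, the same safeguards can be baked in at creation time. The sketch below is a hedged illustration using Zoom's public REST API (v2); the endpoint path, settings field names, and the OAuth bearer token are assumptions based on Zoom's developer documentation, not something prescribed by the ADL list.

```python
# Hedged sketch: applying anti-Zoom-bombing safeguards when creating a
# meeting via Zoom's REST API (v2). Field names and endpoint are assumptions
# drawn from Zoom's public API docs; verify against current documentation.
import json
import urllib.request


def hardened_meeting_payload(topic: str, password: str) -> dict:
    """Build a meeting-creation payload with the checklist's safeguards."""
    return {
        "topic": topic,
        "type": 2,                      # scheduled meeting
        "password": password,           # require a password to join
        "settings": {
            "join_before_host": False,  # disable "Join Before Host"
            "waiting_room": True,       # enable the waiting room
            "mute_upon_entry": True,    # mute all participants on entry
            "use_pmi": False,           # per-meeting ID, not personal ID
        },
    }


def create_meeting(token: str, payload: dict) -> bytes:
    """POST the payload to Zoom; requires a valid OAuth bearer token."""
    req = urllib.request.Request(
        "https://api.zoom.us/v2/users/me/meetings",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return urllib.request.urlopen(req).read()


payload = hardened_meeting_payload("Morning meditation", "s3cret-passcode")
print(payload["settings"]["waiting_room"])  # True
```

In-meeting steps (muting, locking, removing users) still happen through the client UI or the host controls; the payload above only covers the "before the meeting" items.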
Zoom itself has faced scrutiny for quite a while, and its problems extend beyond a vulnerability to hacks. "Things you just would like to have in a chat and video application — strong encryption, strong privacy controls, strong security — just seem to be completely missing," said Patrick Wardle, a security researcher who previously worked at the National Security Agency, per NPR. Wardle discovered a flaw in Zoom's code that could allow hackers to watch video participants through a webcam, a problem Zoom says it has fixed.
And of course, like most big tech companies, Zoom is in cahoots with the evil mastermind behind tech's greatest data suck: Facebook.
Zoom Has Been Quietly Sharing Data with Facebook
Many websites use Facebook's software to enhance and facilitate their own programs. For its part, Zoom connects to Facebook's Graph API the moment it opens, which immediately places data in the hands of Facebook's developers, according to Motherboard. The app lets Facebook know the time, date, and location of every Zoom call, along with additional device information. The problem is that Zoom never asked its users for permission to share this data with Mark Zuckerberg's company.
Following the Motherboard report, a California Zoom user sued the company for failing to "properly safeguard the personal information of the increasing millions of users," arguing that Zoom is violating California's Unfair Competition Law, Consumers Legal Remedies Act and the Consumer Privacy Act. "The unique advertising identifier allows companies to target the user with advertisements," reads the lawsuit. "This information is sent to Facebook by Zoom regardless of whether the user has an account with Facebook."
In response, Zoom CEO Eric Yuan published a blog post in which he stated, "Our customers' privacy is incredibly important to us, and therefore we decided to remove the Facebook SDK in our iOS client and have reconfigured the feature so that users will still be able to log in with Facebook via their browser." Still, users are questioning whether one CEO's word is enough.
There are other eerie realities about Zoom. For example, call organizers have more power than we think. Using certain settings (which can all be turned off), hosts can read their call participants' messages and can see whether attendees clicked away from the call.
Based on its own technical white paper, WIRED reported on Twitter, Zoom falsely marketed one of its features as making meetings "end-to-end encrypted."
Journalist Xeni Jardin also noted on Twitter that the contents of thousands of video calls made on the app were exposed on the open web, where they were easily available.
Certainly in the future, more dark truths will emerge about Zoom. For now, many tech companies are advising users not to say or do anything on Zoom that they wouldn't want to be broadcast to the public. Of course, Zoom is probably no more or less safe than the rest of the Internet, a thought that could be comforting or nightmarish depending on what you've been up to online.
A Much Larger Problem: The Internet Is Not a Safe Space
The prospect of a Zoom hack or data breach is scary, and it's absolutely not what any of us want to worry about during these unstable quarantined times. Yet it also reveals a dark truth about the Internet: nothing is safe online, and pretty much everything we post is being used to sell us things. Unless you're a super-famous person or the unlucky target of a bored troll, you should be less afraid that someone will go through your Google search history and more afraid of what's happening to all the home addresses and emails you've entered into websites' pop-up questionnaires.
Zoom's encryption may be shoddy, but it's certainly not the only site that's selling your data to Google, Facebook, and their cohort of advertising companies. In fact, every ad you see is based on your web search history, demographics, location, and other factors individual to you.
Data, which includes but is not limited to the numbers, emails, and information you enter into all those online forms, is one of the most valuable commodities of the Internet age. Every time we do anything online, or carry our phones anywhere, someone is collecting data; typically it's funneled into massive artificial intelligence programs that synthesize it and use it to sell products.
"We are already becoming tiny chips inside a giant system that nobody really understands," writes historian Yuval Noah Harari. "Every day I absorb countless data bits through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles."
And what will happen to all this data? It's hard to say—but conspiracies are thriving. "Dataists believe in the invisible hand of the dataflow. As the global data-processing system becomes all-knowing and all-powerful, so connecting to the system becomes the source of all meaning," adds Harari. "Dataists further believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives."
That might be extreme, but perhaps not. During this coronavirus quarantine, we will all be pouring much more information onto the Internet than ever before, providing the Internet with intimate knowledge of our every move—more than enough to create pretty accurate digital representations of our own minds. Whoever finds out the best way to harness and sell this data (be it an AI program, a human, or Elon Musk and Grimes' cyborgian child) may just be our new god.
Contrary to popular belief, there is no hate speech exception to the First Amendment.
The social networking site Gab has been taken offline since it was confirmed that the Pittsburgh synagogue gunman used it to post anti-Semitic hate speech and to threaten Jews. The site is popular with the far right and describes itself as "an ad-free social network for creators who believe in free speech, individual liberty, and the free flow of information online. All are welcome." Gab was originally created by conservative businessman Andrew Torba in response to Twitter clamping down on hate speech in 2016.
Robert Bowers logged onto the platform shortly before killing 11 people at the Tree of Life synagogue on Saturday to post a final anti-Semitic message.
Consequently, the site has been abandoned by payment processing firms PayPal and Stripe, as well as hosting service Joyent and domain registrar GoDaddy. A statement on Gab's website Monday read that the platform would be "inaccessible for a period of time" as it switches to a new web host. It said the issue was being worked on "around the clock." The statement went on to defend the website, saying, "We have been systematically no-platformed [and] smeared by the mainstream media for defending free expression and individual liberty for all people."
Regarding Bowers' use of the site, Torba wrote, "Because he was on Gab, law enforcement now have definitive evidence for a motive. They would not have had this evidence without Gab. We are proud to work with and support law enforcement in order to bring justice to this alleged terrorist."
But companies associated with Gab were not satisfied by the site's cooperation with law enforcement and continue to abandon the site. PayPal, the platform Gab used to manage donations from users, said in a statement, "When a site is explicitly allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action."
A tweet from Gab on Monday morning implied that the people behind the site believe themselves to be a victim of intentional defamation.
Setting aside the questionable intent of the decidedly tone-deaf tweet: legally, Gab did not do anything wrong. Contrary to popular belief, there is no hate speech exception to the First Amendment. The Supreme Court reaffirmed this in 2017 in Matal v. Tam, writing, "Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful...the proudest boast of our free speech jurisprudence is that we protect the freedom to express 'the thought that we hate.'" Despite this, many people are calling for the permanent removal of the site; as Wired points out, "Momentary political rage can blind people into abandoning sacred values."
However, the internet inarguably contributes to the creation of extremists, as we have seen in the cases of terrorists, rapists, school shooters, and now the synagogue shooter in Pittsburgh. Sites like Gab allow users to easily find other people who share their most extreme viewpoints, inevitably normalizing disturbing rhetoric the user might otherwise have suppressed or self-corrected in time. Sites like Gab thus become polarizing spaces that help sow the kinds of ideas that lead to violent acts. But if there's no legal action to be taken against a site like Gab without damaging free speech, what can be done?
Justice Anthony Kennedy said in his opinion in Matal v. Tam, "A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government's benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society."
While what exactly those safeguards are remains unclear, one can speculate that what Kennedy meant is exactly what Gab is now calling unjust. As previously mentioned, the site has been abandoned by the companies whose services it needed to remain online. And just as Gab has the right to allow freedom of expression on its site as it sees fit, those companies are also free to express themselves by refusing to work with websites that allow hateful rhetoric.
Indeed, the conversation surrounding the fate of Gab has revealed that freedom of speech online is not decided by the government, but by the social media platforms, hosting services, and domain registrars that are free to decide what kind of opinion their company wants to be associated with. This also means that, on some level, what is seen as acceptable online is driven by consumer outrage and approval.
For example, after facing criticism for allowing users to post prejudiced content, larger social networking sites like Twitter and Facebook have been actively fighting against hateful rhetoric with varying degrees of success. In 2016, a code of conduct was established by the European Union in collaboration with Facebook, Twitter, YouTube, and Microsoft. The code is aimed at fighting racism and xenophobia and encourages the social media companies to remove hate speech from their platforms.
So, instead of outraged Americans calling for the legal suppression of sites like Gab — an impossibility if the First Amendment is to remain intact — the real power of the individual to fight hate speech is in one's ability to support or boycott companies based on how they handle free expression.
Brooke Ivey Johnson is a Brooklyn based writer, playwright, and human woman. To read more of her work visit her blog or follow her twitter @BrookeIJohnson.
700,000 Muslims were forced to flee to neighboring Bangladesh in 2017.
On Monday, Facebook said it removed 13 pages and 10 accounts controlled by the Myanmar military in connection with the Rohingya refugee crisis.
The accounts were masquerading as independent entertainment, beauty, and information pages, such as Burmese popstars, wounded war heroes, and "Young Female Teachers." Fake postings reached 1.35 million followers, spreading anti-Muslim messages to social media users across the Buddhist-majority country.
Facebook's move comes a year after 700,000 Rohingya, a Muslim minority group in Myanmar, were forced to flee to neighboring Bangladesh amid widely-documented acts of mob violence and rape perpetrated by Myanmar soldiers and Buddhist mobs. The United Nations Human Rights Council denounced the crisis as "a textbook case of ethnic cleansing and possibly even genocide."
Rohingya children rummaging through the ruins of a village market that was set on fire. (Reuters)
Last month, the social media giant announced a similar purge, removing Facebook and Instagram accounts followed by a whopping 12 million users. Senior General Min Aung Hlaing, commander-in-chief of the Myanmar armed forces, was banned from the platform, as was the military's Myawady television network.
Over the last few years, Facebook has been in the hot seat for its tendency to spread misinformation. In the 2016 U.S. presidential election, inauthentic Facebook accounts run by Russian hackers created 80,000 posts that reached 126 million Americans through likes, shares, and follows. The problem persisted into the 2018 midterm elections, ahead of which Facebook removed 559 pages that broke the company's policies against spreading spam and coordinating influence efforts. Recent campaigns originating in Iran and Russia target not only the U.S. but also Latin America, the U.K., and the Middle East.
The situation in Myanmar is particularly troubling—it's not an effort by foreign powers to stoke hate and prejudice in a rival, but rather an authoritarian government using social media to control its own people. According to the New York Times, the military Facebook operation began several years ago with as many as 700 people working on the project.
Screen shots from the account of the Myanmar Senior General Min Aung Hlaing, whose pages were removed in August.
Though claimed to show evidence of conflict in Myanmar's Rakhine State in the 1940s, the images are in fact from Bangladesh's 1971 war for independence from Pakistan.
Fake pages of pop stars and national heroes would be used to distribute shocking photos, false stories, and provocative posts aimed at the country's Muslim population. They often posted photos of corpses from made-up massacres committed by the Rohingya, or spread rumors about people who were potential threats to the government, such as Nobel laureate Daw Aung San Suu Kyi, to hurt their credibility. On the anniversary of September 11, 2001, fake news sites and celebrity fan pages sent warnings through Facebook Messenger to both Muslim and Buddhist groups that an attack from the other side was impending.
Facebook admitted to being "too slow to prevent misinformation and hate" on its sites. To prevent misuse in the future, they plan on investing heavily in artificial intelligence to proactively flag abusive posts, making reporting tools easier and more intuitive for users, and continuing education campaigns in Myanmar to introduce tips on recognizing false news.
The company described the work it is doing to identify and remove the misleading network of accounts in the country as "some of the most important work being done [here]."
Sites like Facebook will have more and more influence over our elections in the future.
America's favorite uncorroborated news story of the moment is that the Russian government masterminded Trump's rise to power. It's easy to understand why. Introspection after a loss is difficult, and rather than face themselves, the DNC decided to have a seance, evoking a Cold War ghost to explain their defeat. It's somewhat comforting to assume an international conspiracy was behind Hillary Clinton's failure in the 2016 election; it absolves the DNC of any responsibility to change its conduct or adjust its political strategy. That said, there is no hard evidence of collusion, but rather a string of awkward encounters by Trump's largely inexperienced, and frankly stupid, staff. The meat of Russia's "interference" came in the form of social media bots, fake accounts that would automatically repost sensationalist headlines to drum up support for Trump. These accounts are pretty easy to spot, however, as they don't even come close to passing a Turing test.
Even though some Millennials are almost forty, people are still bashing them.
Last year the New York Post ran an article about Millennials making up the largest portion of the American workforce, ignoring a glaringly obvious point: of course 22-37 year olds are the largest portion of the labor market; they're adults. In an effort to make a distinctly un-newsworthy article newsworthy, the Post settled on an old trope: picking on the Millennials. For its part, this article wasn't as bad as most. The author refrained from using words like "entitled" and "coddled" and "irresponsible," but there's still a certain connotation attached to the term Millennial, particularly in the way it pertains to work ethic and maturity. Repudiating a stereotype often doesn't have the desired effect; in fact, it has a tendency of validating the stereotyper.* That said, my editor's asked me to dissect the maelstrom of insults and unfair generalizations that surround my generation, so here it goes.
Are you interacting with a real person, or an automated program? Sometimes, it's hard to tell
For years, science fiction writers have been telling us robots are going to take over the world. It turns out they were right.
But, it's we humans who are doing the androids' dirty work. Unless you've been living in a cabin deep in the woods without the internet (and if so, do you have an extra bunk?), you are probably familiar with the scourge of "Bots," even if you don't recognize the invasion. Bots, short for "robots," are automated programs that run over the internet. On social media, bots have made their presence felt through a wave of fake accounts posing as real people, some 48 million on Twitter alone.