Trending

Hating Robocalls, and Other Things that All Americans Can Agree On

We live in a divided nation—but some things will always bind us together.

Very few people seem to be getting along in America right now. Countless relationships have ended and families have broken apart over political and ideological differences, which have only grown more extreme since the 2016 election. The divides between Democrats and Republicans, pro-lifers and pro-choicers, climate-change deniers and believers, and many more have become unfathomably vast.

Image via the Seattle Times

But amidst all the chaos, violence, and noise, there are some issues that are decidedly nonpartisan; some topics so unanimously agreed upon that, for a moment, it almost seems like we're all just human. In a time of rage, here are the few points of commonality we have.

1. Robocalls Should Stop Forever

There are so many contentious issues being debated in Congress today, from the Green New Deal to bathroom bills to anything even remotely connected to the president, that it's safe to say there are very few things everyone in the House and Senate agrees upon. But recently, two bills were introduced in the spirit of stopping robocalls—those awful telemarketer messages that constantly interrupt our day with health insurance scams or calls from the Chinese consulate—forever. One is the Stopping Bad Robocalls Act, from Representative Frank Pallone of New Jersey. The other is Massachusetts Senator Ed Markey's Telephone Robocall Criminal Abuse Enforcement and Deterrence Act. Both proposals would make it much harder for telemarketers to call and force their wills upon unsuspecting constituents. According to Markey, "If this bill can't pass, no bill can pass."


AI support centre. Image via Ars Technica

2. Voting is Important

Now, though the question of whom to vote for is one of the easiest ways to turn an ordinary Thanksgiving dinner into a full-on screamfest, most Americans do agree that as citizens of this country, we are responsible for performing our civic duty and making our political opinions heard. Starting way back with the Founding Fathers, this has been an American ideal that nobody but the staunchest anarchists or the most apathetic among us resists. Even so, only around 58.1% of America's voting-eligible population voted in 2016, although 67% of Americans believe that not voting is a huge problem, according to a survey by the Public Religion Research Institute. Maybe the disparity lies in the fact that people who don't believe in voting probably aren't likely to respond to a random political survey, either.

3. The News Is Fake

No matter where you prefer to get your news, most Americans agree that the media has serious issues—namely the abundance of falsified information plaguing and distorting everything from our elections to our dating lives. The issue isn't only a problem among journalists; politicians themselves are also widely distrusted, and for good reason. In 2010, Senator Jim DeMint proclaimed that 94% of bills in Congress are passed without issue; fact-checkers found the real figure to be about 27.4%—although who knows if that statistic is true either, even if it did come from a Pulitzer-prize-winning political fact-checking organization. Since then, things have spiraled further and further out of control. There's no legitimate way to measure how much fake news is out there, but according to one survey, most viewers were suspicious of 80% of the news they saw on social media and 60% of what they saw online overall. Though if you're like the majority of Americans, you won't be taking this article's word for it.

Image via Vox

4. We Should Have Healthcare

Although there is certainly no unanimity, most Americans do support healthcare for all. According to a 2018 poll, six out of ten Americans believe the government should provide healthcare for everyone; another survey, from The Hill, found that 70% of Americans support Medicare for All, and even a small majority of Republicans are in favor of the idea.

5. The Nation Is Divided

We can all agree on one thing: disagreeing. 81% of Americans believe that we are more divided than at any other time in our nation's history, according to Time. (Remember, there was this thing called the Civil War.) Americans can't even agree on what exactly the nation's most significant points of disagreement are: most Democrats believe gun control is a huge issue while most Republicans consider it unimportant, and the same goes for climate change and income inequality, according to surveys from the Pew Research Center.

Although contention and chaos might be the laws of the day, at least we'll always have a shared hatred of telemarketers to bind us all together.


Eden Arielle Gordon is a writer and musician from New York City.

Researchers Have Created an AI Too Dangerous to Release. What Will Happen When It Gets Out?

The GPT-2 software can generate fake news articles on its own. Its creators believe its existence may pose an existential threat to humanity. But it could also present a chance to intervene.

Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.

The software, called GPT-2, can generate cohesive, coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.
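GPT-2 itself is a massive neural network, but the core mechanic it automates, predicting a plausible next word from the words that came before, can be sketched with a toy model. The tiny corpus and the `generate` function below are illustrative inventions for this article, not OpenAI's code or anything resembling GPT-2's actual architecture:

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction. Real models like GPT-2
# learn from billions of words; this one learns from a single sentence.
corpus = "the cat sat on the mat and the cat ran".split()

# Record which words were observed to follow each word (a bigram model).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length, seed=0):
    """Extend `start` by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Scaled up by many orders of magnitude, with a neural network in place of the lookup table, this next-word game is what lets GPT-2 continue a news lede or a fantasy story in a plausible voice.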

Fears like this led the Elon Musk-founded company OpenAI to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," they announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

In addition to writing a cohesive fictional story based on Lord of the Rings, the software wrote a logical scientific report about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."

This journalistic aptitude sparked widespread fears that AI technologies as sophisticated as the GPT-2 could influence upcoming elections, potentially generating unfathomable amounts of partisan content in a single instant. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."

Elon Musk quit OpenAI in 2018, but his legacy of fear and paranoia regarding AI and its potential evils lives on. The specter of that caution was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," said Jack Clark, policy director of OpenAI, when asked about the decision to keep the new software under lock and key.

The fears of Elon Musk. Image via express.co.uk

In a world already plagued by fake news, catfishing, and other forms of illusion made possible by new technology, AI seems like the natural next step in the dizzying sequence that has turned the online world from a repository of cat videos (the good old days) into today's vortex of ceaselessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance against AI's unstoppable growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."

Stephen Hawking's apocalyptic visions. Image via longroom.com

But until AI achieves the singularity—the point at which it attains and then supersedes human intelligence—it remains subject to the whims of whoever is controlling it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.

When it comes down to it, for now, AI is a weapon.

When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid creations however they wish. Instances of robot beheadings and other violent behavior toward AI hint at a darker trend that could emerge should AI become a free-for-all: a humanoid object that can be treated in any way on the basis of its presumed inhumanity.

Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we all become more dependent on technology, the line dividing the human from the robotic grows less clear. As a man-made invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, questioning the dearth of ethics in Silicon Valley and in the tech sphere as a whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown that this is untrue, including instances where law enforcement software displayed racist bias against black people (based on data collected by humans).

AI can be just as prejudiced and closed-minded as a human, if not more so, especially in its early stages, when it is not yet sophisticated enough to think critically. An AI may not feel in and of itself, but—much as we learn how to process the world from our parents—it can learn how to process and understand emotions from the people who create it and from the media it absorbs.

Image via techno-pundit.blogspot.com

After all, who could forget Microsoft's Twitter bot Tay, which began spewing racist, anti-Semitic rants mere hours after its launch—rants that it, of course, learned from human Twitter users? Studies have estimated that 9 to 15 percent of all Twitter accounts are bots—but each one of those bots had to be created and programmed by a human being. Even if a bot was not created for a specific purpose, it still learns from the human presences around it.

A completely objective, totally nonhuman AI is kind of like the temperature absolute zero; it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that can inspire it to. It can also learn morality if its teachers choose to imbue it with the ability to tell right from wrong.

Image via cio.com

The quandary facing AI's creators may not be so different from the struggle parents face when deciding whether to let their children watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine the extent to which they can trust the public with their work. They also have to determine which aspects of humanity they want to expose their inventions to.

OpenAI may have kept their kid safe inside the house a little longer by withholding GPT-2, but that kid is growing—and when it goes out into the world, it could change everything. For better or worse, at some point, super-intelligent AI is going to wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape who it's going to be when it arrives.


Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.

Automation and the Post-Labor Economy

Automation is set to replace a large portion of the American workforce. What do we do once it happens?

In his 1984 essay "Is It O.K. to Be a Luddite?", Thomas Pynchon predicted that "the next great challenge to watch out for will come when the curves of research and development in artificial intelligence, molecular biology and robotics all converge." Nearly 35 years later, that convergence is upon us. Barring some sort of federally enforced halt on technological progress, automation of most basic services is inevitable.

Self-driving cars are continuing to improve. Automated checkout lines are being implemented all over the American retail space. There are even programs being written that may do the majority of our accounting work in the future. Sadly, the common claim that technological advances and economic growth go hand in hand with job creation is spurious at best. In 1964, AT&T was worth $267 billion (adjusted for inflation) and employed upwards of 700,000 people. Today, Google, which is worth roughly twice as much as 1960s AT&T, employs only about 88,000. According to the McKinsey Global Institute, up to 375 million people worldwide could be out of work by 2030. Unlike the second industrial revolution, which gave us cars and airplanes in the 20th century, the third industrial revolution probably won't create many new jobs. In fact, by that same 2030 mark, the U.S. could be staring down the barrel of 35% unemployment.
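Taking the paragraph's figures at face value, a quick back-of-the-envelope calculation makes the shift concrete. Treating each company's "worth" as a single lump value is a simplification for illustration only:

```python
# Back-of-the-envelope comparison using the article's own numbers.
att_value = 267e9              # 1964 AT&T, inflation-adjusted dollars
att_employees = 700_000
google_value = 2 * att_value   # "roughly twice as much" as 1960s AT&T
google_employees = 88_000

att_per_head = att_value / att_employees
google_per_head = google_value / google_employees

print(f"AT&T:   ${att_per_head:,.0f} of value per employee")
print(f"Google: ${google_per_head:,.0f} of value per employee")
print(f"Ratio:  {google_per_head / att_per_head:.0f}x")
```

By these rough numbers, Google carries on the order of sixteen times as much value per worker as 1960s AT&T did, which is exactly the point: modern economic value no longer translates into mass employment.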

The specific numbers, which I've thoroughly explored here, aren't nearly as important as how the U.S. government chooses to address the issue. Mass unemployment is coming, and it's hard to even imagine what it might look like, let alone how we're going to deal with it. In his piece A World Without Work, Derek Thompson attempts to tackle this issue, comparing the future United States to present-day Youngstown, Ohio, a once-prosperous steel town that lost 50,000 jobs to overseas manufacturing in the late seventies. In the years following the steel industry's evaporation, rates of depression, suicide, and spousal abuse all jumped radically. According to professor John Russo, "Youngstown's story is America's story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed." Thompson's thesis is that work is so ingrained in the American psyche that regardless of whether or not we end up with a welfare state to take care of the millions of jobless, there will be civil unrest. Kurt Vonnegut came to a similar conclusion 63 years earlier in his novel Player Piano, in which the government was forced to provide not only complete welfare for the unemployed masses, but fake jobs as well.

Kurt Vonnegut Jr. "Player Piano"

With regard to the impending employment drought, the government is left with a few options. It can ignore the issue, allowing millions to slip into grinding poverty and turning Youngstown, Ohio into the norm. This type of laissez-faire capitalism would have made Ronald Reagan blush, but the problem is, with no money, there are no consumers. Another solution that's been popularized in recent years is Universal Basic Income, a program in which the government pays all of its citizens enough money to live on, regardless of whether or not they're employed. Plenty of tech moguls, from Elon Musk to Mark Zuckerberg, have embraced the idea that the money made from technological advances should be, at least partially, given back to the people. On paper, it's a no-brainer. People need money to live, and companies need people to have money, or else no one would buy anything. This would, as it were, keep the trains running on time. The problem is, this plan ignores Thompson's point about the vacancy of purpose left by a post-labor economy. There's a feeling of despair attached to having nothing to do. Anyone who's ever spent a teenage summer vacation not working can attest to this, and as evidenced by Youngstown, this listlessness can be destructive, both physically and psychologically.

Sign: "Welcome to Downtown Youngstown"

There is a third option, however, one that's helped Germany lower its unemployment rate: work-sharing. Essentially, the program cuts hours rather than employees. For example, if a company needs to cut 30% of its low-level accounting staff, instead of firing 30% of its workers, it cuts everyone's hours by 30%. Conventional wisdom says that insulating less efficient workers from layoffs while cutting the best workers' hours is a recipe for disaster, but we are entering a distinctly unconventional time. If employees are losing their jobs to hyper-efficient automation, the potential dip in productivity should be more than mitigated. That said, work-sharing doesn't completely fix the problem either. Unless major corporations suddenly start valuing benevolence over higher profit margins, fewer hours mean less pay. Trim the hours enough, and the results of work-sharing and the results of ignoring the problem altogether start looking eerily similar.
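The work-sharing arithmetic above can be sketched with made-up numbers; the ten-person team and 40-hour week below are hypothetical, not drawn from any real program:

```python
# Sketch of the work-sharing trade-off: cutting everyone's hours by 30%
# versus firing 30% of the staff. Both shed the same total paid hours;
# only one leaves people with zero income.
staff = 10    # hypothetical accounting team
hours = 40    # weekly hours per person
cut = 0.30    # share of labor the company must shed

# Option A: layoffs. 30% of workers lose everything.
layoffs_employed = round(staff * (1 - cut))
layoffs_total_hours = layoffs_employed * hours

# Option B: work-sharing. Everyone keeps a job at reduced hours.
sharing_employed = staff
sharing_total_hours = staff * hours * (1 - cut)

print(layoffs_total_hours, sharing_total_hours)  # identical total labor
print(layoffs_employed, sharing_employed)        # 7 vs. 10 people employed
```

The totals come out the same, which is why the article's caveat matters: trim the hours far enough and a work-sharing paycheck starts to look like no paycheck at all.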

Maybe Vonnegut had it right in 1952. Maybe the only way to simultaneously combat mass unemployment and the moral corrosion that takes place when people feel they have no purpose is for the government to manufacture meaningless tasks for the millions of unemployed. But it's possible that it doesn't have to be quite that bleak. American infrastructure is due for a massive overhaul, and construction workers aren't at particularly high risk of losing their jobs to robots any time soon. Some have suggested that the best way to address the automation problem is through a new New Deal, allowing people to find work rebuilding roads and bridges. But this is only a temporary fix. There's only so much infrastructure to rebuild, and only so long before construction jobs in unpredictable conditions are automated out with the rest of the workforce. Of course, it's possible that by then American society will have found a way to define individual purpose as separate from occupation. If not, Youngstown may be a snapshot of the future.

Matt Clibanoff is a writer and editor based in New York City who covers music, politics, sports and pop culture. His editorial work can be found in Inked Magazine, Pop Dust, The Liberty Project, and All Things Go. His fiction has been published in Forth Magazine. -- Find Matt at his website and on Twitter: @mattclibanoff