In a way, we're all living in the matrix: moving around within an illusion of freedom when really our lives are dictated by technology.
Forty-odd years ago, there was no such thing as a cell phone, and the only computers in existence took up entire rooms. Then the World Wide Web was born.
Fifteen years ago, the iPhone was just a seed of a dream in Steve Jobs' mind. But today, if you're reading this, you have access to countless screens and endless amounts of information, and you probably have a phone in your pocket that you can't be separated from without experiencing a cold rush of panic. Like it or not, you live in the digital age.
Everything is happening so fast these days; it's hard to find the time to seriously question how technology has altered the fabric of our realities. But here are four major ways the Internet has made our minds different from how they were before—so much so that we can never go back.
1. We never have to wonder about anything
Once upon a time, if a question came up at dinner about, say, climate change or the effects of a certain drug, you would have to either find someone who knew the answer or wait until a library opened. Then you'd have to go there and comb through the Dewey Decimal System until you found a volume that might provide the answer.
Today, we all have virtually any piece of information, no matter how small or obscure, quite literally at our fingertips. So we should be smarter than ever, right? But all this instantly accessible information comes at a price. One study found that millennials have even worse memories than seniors, and a Columbia University study revealed that when people expect to be able to look something up later, they're less likely to remember it.
In his book The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr argues that technology is making us stupider: less likely to think critically and less able to retain the information we need. Part of this is because every time we go online, we are confronted with billions of sources vying for our attention, making it difficult to muster the kind of focused concentration needed to synthesize and reflect on information.
Also, now that we have endless information at our fingertips, some have proposed that we're less curious than ever, less inclined to come up with original ideas. But curiosity is hard to pin down, and while the Internet offers more resources than ever, more people are also creating content than ever before. Innovative technologies crop up every day, suggesting that even if the Internet is making some of us stupider, it's also a fertile breeding ground for incredible, world-changing inventions and unprecedented viral creativity.
2. We're more interconnected—and lonelier than ever
Once upon a time, you had to call someone up to speak to them; now you can see what any of your friends are doing at any moment. Instagram and Snapchat stories make it possible to share intimate images of our lives with huge audiences, and once something is posted, it's never really gone from the Internet, even if you delete it. We can see the daily coffee choices and midnight tearstained selfies of our favorite stars; we can hit up old friends from across the globe with a single Facebook search.
Humans have always been hard-wired for connection, desperately looking for kinship and community, and so it makes sense that the Internet has become so addictive. Every ping, alert, and notification provokes the same kind of dopamine rush that comes from an expression of love and friendship. On the other hand, cyberbullying and persistently comparing oneself to others in the virtual sphere can both have very adverse effects in the real world.
Some studies have proposed that social media increases levels of loneliness. One found that heavy Facebook, Snapchat, and Instagram use can contribute to depression in young adults; excessive time on Facebook has also been associated with poorer physical health and lower life satisfaction. Then again, social media has given isolated adults and senior citizens an opportunity to reach out and connect, and online fan and lifestyle communities provide oases for people all over the world.
For better or for worse, the Internet has changed the way we connect. It's also changed the way we love. Dating apps generate some 26 million matches every day, and roughly 13% of couples who met on a dating app have gone on to marry. Phones allow us to communicate with anyone at any moment of the day, creating whole new rules and expectations for relationships and making them altogether more interactive and involved than they once were. Meanwhile, pornography is fundamentally changing the way we have sex, with many studies suggesting that it's lowering sex drives and creating unrealistic expectations across the board.
It's the same for work: a Fortune study found that the average white-collar worker spends three hours per day checking email. This goes hand in hand with the gig economy, that staple of millennial culture built on perpetual interconnectedness and 24/7 "hustle," a phenomenon that often leads to burnout.
3. We can have more than one reality—or can hide inside our own worlds more easily than ever
The Internet has made it easier than ever to craft false personas and embody illusory identities. We can use Photoshop to alter our appearances; we can parlay small talents into viral fame and huge monetary gains; and we can escape our world entirely in exchange for online communities and ever-expanding virtual and augmented reality options.
The Internet is also altering our perceptions of reality. People once imagined that interconnected online communities would facilitate the sharing of diverse viewpoints; instead, social media has funneled us into echo chambers more isolated and partisan than anything we'd encounter in real life.
In short, we're all at risk of being catfished.
4. Many of us are completely addicted
When was the last time you went a day without checking your phone? A week? And do you think that, if you needed to, you could quit? Most likely the answer is no, so you'd better believe it: you're addicted to technology. But you're not alone. A 2017 study found that as many as 210 million people worldwide may suffer from Internet addiction.
There are five primary types of Internet addiction: cybersexual (porn) addiction, net compulsions (online shopping), cyber-relationship addiction (online dating), gaming, and information seeking (surfing). In recent years, Internet addiction rehab has grown in popularity. The majority of people with diagnosable Internet addiction are men in their teens to late thirties, but it's likely that we all suffer from it to some extent.
Although the Internet is changing everything about our lives, there is no clear consensus on whether these changes are for better or worse. But they will only grow more extreme. Moore's Law observes that the number of transistors on a chip doubles roughly every two years, meaning that computing power has long advanced at an exponential rate. If the past twenty years have given us iPhones, what will the next twenty bring? The next hundred, if we make it that far without global warming ending everything?
Only time will tell. We won't be the same—but then again, we were never meant to remain stagnant as a species. Change and chaos are the laws of the human race, and as a species, we've always been obsessed with progress.
Some theorists believe that technological progress will only end when we create a machine more intelligent than we are, in a hypothetical event called the singularity. If that happens, the AI could decide to eliminate us. But that's another story; until then, the sky is the limit for innovators and consumers everywhere.
Eden Arielle Gordon is a writer and musician from New York City. Follow her on Twitter @edenarielmusic.
The GPT-2 software can generate fake news articles on its own, and its creators fear it could be dangerously misused. But its arrival could also present a chance to intervene.
Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.
The software, called GPT-2, can generate coherent text in multiple genres, including fiction, news, and unfiltered Internet rants, making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.
Fears like this led OpenAI, which Elon Musk co-founded, to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," the company announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
In addition to writing a cohesive fictional story set in the world of Lord of the Rings, the software produced a plausible-sounding news report about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."
This journalistic aptitude sparked widespread fears that AI as sophisticated as GPT-2 could influence upcoming elections, generating unfathomable amounts of partisan content in an instant. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."
Elon Musk quit OpenAI in 2018, but his legacy of caution regarding AI and its potential evils lives on, and it was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," said Jack Clark, policy director of OpenAI, of the decision to keep the new software under lock and key.
In a world already plagued by fake news, catfishing, and other technological illusions, AI seems like a natural next step in the dizzying sequence of deception that has turned the online world from a repository of cat videos (the good old days) into today's vortex of endlessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance against AI's unstoppable growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."
But until AI reaches the singularity, the point at which it matches and then surpasses human intelligence, it remains subject to the whims of whoever controls it. Fears that AI will churn out fake news are essentially fears of things humans have already done; all the evil at work on the Internet has had a human source.
For now, when it comes down to it, AI is a weapon in human hands.
Once AI is released into the world, a lot could happen. AI could become a victim: a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid creations however they wish. Instances of robot beheadings and other violence toward AI hint at a darker trend that could emerge should AI become a free-for-all, a humanoid object that can be treated any way at all on the basis of its presumed inhumanity.
Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we grow more dependent on technology, the line dividing the human from the robotic blurs. As a manmade invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it, and it could take on their violent tendencies, too. Some thinkers have sounded the alarm about this, decrying the dearth of ethics in Silicon Valley and the tech sphere as a whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown otherwise, including instances where law enforcement software displayed racist bias against black people, bias learned from data collected by humans.
AI can be just as prejudiced and close-minded as a human, if not more so, especially in its early stages, when it is not yet sophisticated enough to think critically. An AI may not feel in and of itself, but, much like we learn how to process the world from our parents, it can learn how to process and understand emotions from the people who create it and from the media it absorbs.
After all, who could forget Microsoft's Twitter bot Tay, which began spewing racist, anti-Semitic rants mere hours after its launch, rants that it, of course, learned from human Twitter users? Studies have estimated that 9 to 15 percent of all Twitter accounts are bots, but each one of those bots had to be created and programmed by a human being. Even if a bot was not created for a specific purpose, it still learns from the human presences around it.
A completely objective, totally nonhuman AI is a bit like absolute zero: it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that inspire it to. It can also learn morality, if its teachers choose to imbue it with the ability to tell right from wrong.
The quandary AI's creators face may not be so different from the one parents face when deciding whether to let their children watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine how far they can trust the public with their work, and which aspects of humanity they want to expose their inventions to.
OpenAI may have kept its kid safe inside the house a little longer by withholding GPT-2, but that kid is growing, and when it goes out into the world, it could change everything. For better or worse, at some point, super-intelligent AI will wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape who it's going to become.
Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.