
How the Internet Is Changing Your Brain

In a way, we're all living in the matrix: going through our days within an illusion of freedom when really our lives are completely dictated by technology.

40-odd years ago, there was no such thing as a cell phone, and the only computers in existence took up entire rooms. Then the World Wide Web was born.


8K Writers Can’t Be Wronged - AI Platforms “Scraping” & Stealing Bestselling Books

Close to 8,000 writers recently signed a letter from The Authors Guild protesting the unauthorized use of their stories.

Technology is inescapably linked to the art and craft of writing. Humanity’s desire to share and preserve its thoughts, its pleasures, its discoveries, its knowledge, and its very survival led to hieroglyphics, to paper and ink, to the printing press, the typewriter, and the computer. Technology’s traditional aim was to make it faster and easier to create and disseminate the written word. Now it seems technology’s out to eliminate writers altogether.

Or, at the very least, the writers’ livelihoods.

NPR’s Chloe Veltman tells us that The Authors Guild – an organization founded in 1912 to “support working writers and their ability to earn a living from authorship” – is taking on “artificial intelligence companies like OpenAI and Meta” which use writers’ work “without permission or compensation.” As Veltman describes it, “text-based generative AI applications like GPT-4 and Bard...scrape the Web for authors' content without permission or compensation and then use it to produce fresh content in response to users' prompts”.

Approximately eight thousand writers, Veltman reports, have signed a Guild letter protesting such unauthorized use of their material. Some of the better-known scribes include Nora Roberts, Viet Thanh Nguyen, Michael Chabon, and Margaret Atwood.

The Authors Guild’s letter is not the only action being taken in response to AI. Other writers have filed class-action suits against AI companies, claiming their work is being pirated. AI is also one of the central issues in the Writers Guild of America’s strike (which began May 2nd), bringing American film and television production to a standstill. The New York Times summarizes the WGA’s position: “Writers are asking the studios for guardrails against being replaced by AI, having their work used to train AI or being hired to punch up AI-generated scripts at a fraction of their former pay rates.”

Award-winning writer/director Doug Burch describes the WGA strike as “vital to the future of those wanting basic living wages...It’s truly despicable when CEOs make $400 million a year and say that writers and actors are being unrealistic wanting to at least make a living wage.”

And just what does a writer actually earn? A forthcoming report from The Authors Guild puts the median income for a full-time writer at $23,000 in 2022, a figure that follows a precipitous 42% decline in writers’ incomes between 2009 and 2019.

History proves time and again that the haves never give anything to the have-nots without being forced to “share the wealth.” Whether it's coal mining, auto manufacturing, or movie-making, it’s taken the commitment of generations of die-hard activists to help address an economic imbalance.

The writers have one huge strength, something no boss or executive can do without: their talent, craft, originality, passion, and grit. As I understand it, AI can synthesize, imitate, and mimic a writer’s work. The one thing it can’t do is create original thought and original material. Writers – with their unique perspectives and experiences, their individual and idiosyncratic use of language, and their ability to capture human behavior in all its grunge and glory – cannot be replaced.

Books, films, non-fiction, graphic novels, and poems are not merely material to be scraped, stolen, and exploited. They’re not “a data set to be ingested by an AI program.” They hold our past, our future, and our quotidian lives; they teach us what it is to be human. This is a writer’s work.

The message is clear – Support the writers.

Researchers Have Created an AI Too Dangerous to Release. What Will Happen When It Gets Out?

The GPT-2 software can generate fake news articles on its own. Its creators believe its existence may pose an existential threat to humanity. But it could also present a chance to intervene.

Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.

The software, called GPT-2, can generate cohesive, coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.

Fears like this led the Elon Musk-founded company OpenAI to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," they announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

In addition to writing a cohesive fictional story based on Lord of the Rings, the software wrote a logical scientific report about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."
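To make the underlying idea less mysterious: a language model learns, from a body of text, which words tend to follow which, and then samples new text from those statistics. The toy sketch below uses a simple Markov chain over word pairs. This is a deliberately crude illustration; GPT-2 is a large neural network, not an n-gram model, and the corpus string here is just a hypothetical stand-in for the vast scraped text it was trained on.

```python
import random
from collections import defaultdict

def train(text, n=2):
    """Build a toy n-gram model: map each n-word prefix to the words seen after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - n):
        prefix = tuple(words[i:i + n])
        model[prefix].append(words[i + n])
    return model

def generate(model, length=20, seed=0):
    """Start from a random prefix and repeatedly sample a plausible next word."""
    rng = random.Random(seed)
    prefix = rng.choice(list(model.keys()))
    out = list(prefix)
    for _ in range(length):
        choices = model.get(tuple(out[-len(prefix):]))
        if not choices:
            break  # dead end: this prefix was never followed by anything
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical miniature "corpus" echoing the unicorn demo above.
corpus = ("the unicorns spoke perfect english and the unicorns "
          "lived in a remote valley in the andes mountains")
model = train(corpus)
print(generate(model, length=10))
```

Every word the generator emits was observed in its training text, which is precisely the Authors Guild's complaint scaled down: the model's fluency is borrowed from the material it ingested.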

This journalistic aptitude sparked widespread fears that AI technologies as sophisticated as the GPT-2 could influence upcoming elections, potentially generating unfathomable amounts of partisan content in a single instant. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."

Elon Musk quit OpenAI in 2018, but his legacy of fear and paranoia regarding AI and its potential evils lives on. The specter of his caution was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," said Jack Clark, policy director of OpenAI, when asked about the decision to keep the new software under lock and key.


In a world already plagued by fake news, cat-fishing, and other forms of illusion made possible by new technology, AI seems like a natural next step in the dizzying sequence of illusion and corruption that has rapidly turned the online world from a repository of cat videos (the good old days) to today's vortex of ceaselessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance against AI's unstoppable growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."


But until AI achieves the singularity (the point at which machine intelligence matches and then surpasses our own), it is still subject to the whims of whoever is controlling it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.

When it comes down to the wire, for now, AI is a weapon.

When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid creations however they wish. Instances of robot beheadings and other violent behavior toward AI hint at a darker trend that could emerge should AI become a free-for-all: a humanoid object that can be treated in any way on the basis of its presumed inhumanity.

Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we all become more dependent on technology, there is less of a clear line dividing the human from the robotic. As a manmade invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, questioning the dearth of ethics in Silicon Valley and in the tech sphere on the whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown that this is untrue, including instances where law enforcement software systems displayed racist bias against black people (based on data collected by humans).

AI can be just as prejudiced and close-minded as a human, if not more so, especially in its early stages where it is not sophisticated enough to think critically. An AI may not feel in and of itself, but—much like we learn how to process the world from our parents—it can learn how to process and understand emotions from the people who create it, and from the media it absorbs.


After all, who could forget Microsoft's Tay, the Twitter bot that began spewing racist, anti-Semitic rants mere hours after its launch? It learned those rants, of course, from human Twitter users. Studies have estimated that 9 to 15 percent of all Twitter accounts are bots, but each one of those bots had to be created and programmed by a human being. Even if a bot was not created for a specific purpose, it still learns from the human presences around it.

A completely objective, totally nonhuman AI is like absolute zero: it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that inspire it to. But it can also learn morality, if its teachers choose to imbue it with the ability to tell right from wrong.


The quandary facing AI's creators may not be so different from the struggle parents face when deciding whether to allow their children to watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine how far they can trust the public with their work, and which aspects of humanity they want to expose their inventions to.

OpenAI may have kept their kid safe inside the house a little longer by withholding GPT-2, but that kid is growing—and when it goes out into the world, it could change everything. For better or worse, at some point, super-intelligent AI is going to wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape who it's going to be when it arrives.


Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.

AI and the Hiring Process

Are games and algorithms the future of the interview process?

Even for qualified applicants, the interview process can be a nerve-wracking experience, one that forces the interviewee to answer a series of exacting questions with no real relevance to her ability to perform the job. From the employer side, things aren't much easier. HR reps and middle managers alike often find themselves with employees who look good on paper and talk a big game during their interview, but don't deliver once they've been hired. On top of this, there's nothing really stopping a potential employee from flat-out lying during the hiring process. If an interviewee gets caught in a lie, she won't get hired, but she didn't have a job to begin with, so she's no worse for wear. To mitigate these and the myriad other difficulties associated with the hiring process, employers have started using (in a somewhat ironic twist) artificial intelligence to aid with recruiting.
