“A tree is best measured when it is down,” the poet Carl Sandburg once observed, “and so it is with people.” The recent death of Harry Belafonte at the age of 96 has prompted many assessments of what this pioneering singer-actor-activist accomplished in a long and fruitful life.
Belafonte’s career as a ground-breaking entertainer brought him substantial wealth and fame; according to Playbill magazine, “By 1959, he was the highest paid Black entertainer in the industry, appearing in raucously successful engagements in Las Vegas, New York, and Los Angeles.” He scored on Broadway, winning the 1954 Tony Award for Best Featured Actor in a Musical for John Murray Anderson's Almanac; Belafonte was the first Black person to win the prestigious award. A 1960 television special, “Tonight with Belafonte,” brought him an Emmy for Outstanding Performance in a Variety or Musical Program or Series, making him the first Black person to win that award. He found equal success in the recording studio, bringing calypso music to the masses via such hits as “Day-O (The Banana Boat Song)” and “Jamaica Farewell.”
Harry Belafonte performing "Day-O (The Banana Boat Song)" live (video via www.youtube.com)
Belafonte’s blockbuster stardom is all the more remarkable for happening in a world plagued by virulent systemic racism. Though he never stopped performing, by the early 1960s he’d shifted his energies to the nascent Civil Rights Movement. He was a friend and adviser to the Reverend Doctor Martin Luther King, Jr. and, as the New York Times stated, Belafonte “put up much of the seed money to help start the Student Nonviolent Coordinating Committee and was one of the principal fund-raisers for that organization and Dr. King’s Southern Christian Leadership Conference.”
The Southern Poverty Law Center notes that “he helped launch one of Mississippi’s first voter registration drives and provided funding for the Freedom Riders. His activism extended beyond the U.S. as he fought against apartheid alongside Nelson Mandela and Miriam Makeba, campaigned for Mandela’s release from prison, and advocated for famine relief in Africa.” And in 1987, he was appointed a UNICEF Goodwill Ambassador.
Over a career spanning more than seventy years, Belafonte brought joy to millions of people. He also did something that is, perhaps, even greater: he fostered the hope that a better world for all could be created. And, by his example, he demonstrated how we might go about bringing that world into existence.
Researchers Have Created an AI Too Dangerous to Release. What Will Happen When It Gets Out?
The GPT-2 software can generate fake news articles on its own. Its creators believe its existence may pose an existential threat to humanity. But it could also present a chance to intervene.
Researchers at OpenAI have created artificial intelligence software so powerful that they have deemed it too dangerous for public release.
The software, called GPT-2, can generate cohesive, coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.
Fears like this led the Elon Musk-founded company OpenAI to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," they announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
In addition to writing a cohesive fictional story based on The Lord of the Rings, the software wrote a plausible-sounding news report about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."
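For a concrete sense of what "generating text from a prompt" involves, here is a minimal sketch that drives the smaller GPT-2 checkpoint OpenAI did release. It assumes the third-party Hugging Face transformers library and its hosted "gpt2" weights; the prompt, sampling settings, and seed are purely illustrative, and this is not OpenAI's own tooling.

```python
# Minimal sketch: prompting the small, publicly released GPT-2 checkpoint.
# Assumes the third-party Hugging Face `transformers` library is installed.
from transformers import pipeline, set_seed

set_seed(42)  # make the sample reproducible

# Downloads the small released checkpoint (~124M parameters) on first use.
generator = pipeline("text-generation", model="gpt2")

prompt = ("In a shocking finding, scientist discovered a herd of unicorns "
          "living in a remote, previously unexplored valley, in the Andes Mountains.")

# Sample one continuation of the prompt; settings here are illustrative.
outputs = generator(prompt, max_length=120, num_return_sequences=1,
                    do_sample=True, top_k=50)
print(outputs[0]["generated_text"])
```

Even this small model produces plausible-sounding continuations; the withheld full-size model is simply far more fluent.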
This journalistic aptitude sparked widespread fears that AI as sophisticated as GPT-2 could influence upcoming elections by generating vast amounts of partisan content almost instantly. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."
Elon Musk left OpenAI's board in 2018, but his legacy of fear and paranoia regarding AI and its potential evils lives on. The specter of his caution was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," said Jack Clark, policy director of OpenAI, of the decision to keep the new software under wraps.
The fears of Elon Musk (image via express.co.uk)
In a world already plagued by fake news, catfishing, and other forms of illusion made possible by new technology, AI seems like a natural next step in the dizzying sequence that has turned the online world from a repository of cat videos (the good old days) into today's vortex of ceaselessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance to AI's unchecked growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."
Stephen Hawking's apocalyptic visions (image via longroom.com)
But until AI reaches the singularity, the hypothetical point at which machine intelligence matches and then surpasses our own, it remains subject to the whims of whoever controls it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.
When it comes down to it, for now, AI is a weapon in human hands.
When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid machines however they wish. Instances of robot beheadings and other violent behavior toward AI hint at a darker trend that could emerge should AI become a free-for-all: a humanoid object that can be mistreated on the basis of its presumed inhumanity.
Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we all become more dependent on technology, the line dividing the human from the robotic grows blurrier. As a manmade invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, pointing to the dearth of ethics in Silicon Valley and in the tech sphere as a whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown otherwise, including instances where law enforcement software displayed racist bias against Black people, built on data collected by humans.
AI can be just as prejudiced and closed-minded as a human, if not more so, especially in its early stages, when it is not sophisticated enough to think critically. An AI may not feel anything in and of itself, but, much like we learn how to process the world from our parents, it can learn how to process and understand emotions from the people who create it and from the media it absorbs.
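As a toy illustration of how that inheritance works in practice, consider the sketch below: a model fit on skewed, human-labeled records reproduces the skew in its own predictions. The data, the "signal" and "group" features, and the library choice (NumPy and scikit-learn) are entirely hypothetical and meant only to show the mechanism, not to model any real system.

```python
# Toy sketch of bias inheritance: a model trained on skewed human decisions
# reproduces that skew. All data and features here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)   # a sensitive attribute (0 or 1)
signal = rng.normal(0, 1, n)    # a legitimate, group-independent signal

# Historical labels: humans flagged group 1 more often at the same signal level.
labels = (signal + 0.8 * group + rng.normal(0, 0.5, n) > 0.5).astype(int)

model = LogisticRegression().fit(np.column_stack([signal, group]), labels)

# Identical signal, different group: the learned model now assigns a higher
# probability of being flagged to group 1, because the human skew is in the data.
same_signal = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_signal)[:, 1])
```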
A completely objective, totally nonhuman AI is a bit like absolute zero: it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that inspire it to. It can also learn morality, if its teachers choose to imbue it with the ability to tell right from wrong.
The quandary facing AI's creators may not be so different from the struggle parents face when deciding whether to let their children watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine how far they can trust the public with their work. They also have to determine which aspects of humanity they want to expose their inventions to.
OpenAI may have kept their kid safe inside the house a little longer by withholding GPT-2, but that kid is growing, and when it goes out into the world, it could change everything. For better or worse, super-intelligent AI will at some point wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape who it will become by the time it arrives.
Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.