“A tree is best measured when it is down,” the poet Carl Sandburg once observed, “and so it is with people.” The recent death of Harry Belafonte at the age of 96 has prompted many assessments of what this pioneering singer-actor-activist accomplished in a long and fruitful life.
Belafonte’s career as a ground-breaking entertainer brought him substantial wealth and fame; according to Playbill magazine, “By 1959, he was the highest paid Black entertainer in the industry, appearing in raucously successful engagements in Las Vegas, New York, and Los Angeles.” He scored on Broadway, winning the 1954 Tony for Best Featured Actor in a Musical for John Murray Anderson's Almanac and becoming the first Black man to win the prestigious award. His television special “Tonight with Belafonte” brought him a 1960 Emmy for Outstanding Performance in a Variety or Musical Program or Series, making him the first Black performer to win an Emmy. He found equal success in the recording studio, bringing calypso music to the masses via such hits as “Day-O (The Banana Boat Song)” and “Jamaica Farewell.”
Belafonte’s blockbuster stardom is all the more remarkable for happening in a world plagued by virulent systemic racism. Though he never stopped performing, by the early 1960s he’d shifted his energies to the nascent Civil Rights movement. He was a friend and adviser to the Reverend Doctor Martin Luther King, Jr. and, as the New York Times stated, Belafonte “put up much of the seed money to help start the Student Nonviolent Coordinating Committee and was one of the principal fund-raisers for that organization and Dr. King’s Southern Christian Leadership Conference.”
The Southern Poverty Law Center notes that “he helped launch one of Mississippi’s first voter registration drives and provided funding for the Freedom Riders. His activism extended beyond the U.S. as he fought against apartheid alongside Nelson Mandela and Miriam Makeba, campaigned for Mandela’s release from prison, and advocated for famine relief in Africa.” And in 1987, he received an appointment to UNICEF as a goodwill ambassador.
Over a career spanning more than seventy years, Belafonte brought joy to millions of people. He also did something that is, perhaps, even greater: he fostered the hope that a better world for all could be created. And, by his example, he demonstrated how we might go about bringing that world into existence.
8K Writers Can’t Be Wronged - AI Platforms “Scraping” & Stealing Bestselling Books
Close to 8,000 writers recently signed a letter from The Authors Guild protesting the unauthorized use of their stories.
Technology is inescapably linked to the art and craft of writing. Humanity’s desire to share and preserve its thoughts, its pleasures, its discoveries, its knowledge, and its very survival led to hieroglyphics, to paper and ink, to the printing press, the typewriter, and the computer. Technology’s traditional aim was to make it faster and easier to create and disseminate the written word. Now it seems technology’s out to eliminate writers altogether.
Or, at the very least, the writers’ livelihoods.
NPR’s Chloe Veltman tells us that The Authors Guild – an organization founded in 1912 to “support working writers and their ability to earn a living from authorship” – is taking on “artificial intelligence companies like OpenAI and Meta” which use writers’ work “without permission or compensation.” As Veltman describes it, “text-based generative AI applications like GPT-4 and Bard...scrape the Web for authors' content without permission or compensation and then use it to produce fresh content in response to users' prompts.”
Approximately eight thousand writers, Veltman reports, have signed a Guild letter protesting such unauthorized use of their material. Some of the better-known scribes include Nora Roberts, Viet Thanh Nguyen, Michael Chabon, and Margaret Atwood.
The Authors Guild’s petition is not the only action writers are taking in response to AI. Other writers have filed class-action suits against AI companies, claiming their work is being pirated. AI is also one of the central issues in the Writers Guild of America’s strike (which began May 2), a walkout that has brought American film and television production to a near standstill. The New York Times summarizes the WGA’s position: “Writers are asking the studios for guardrails against being replaced by AI, having their work used to train AI or being hired to punch up AI-generated scripts at a fraction of their former pay rates.”
Award-winning writer/director Doug Burch describes the WGA strike as “vital to the future of those wanting basic living wages...It’s truly despicable when CEOs make $400 million a year and say that writers and actors are being unrealistic wanting to at least make a living wage.”
And just what does a writer earn in a year? A forthcoming report from The Authors Guild asserts that the median income for a full-time writer was $23,000 in 2022. This after a precipitous 42% decline in writers' incomes between 2009 and 2019.
History proves time and again that the haves never give anything to the have-nots without being forced to “share the wealth.” Whether it's coal mining, auto manufacturing, or movie-making, it’s taken the commitment of generations of die-hard activists to help address an economic imbalance.
The writers have one huge strength, something no boss or executive can do without: their talent, their craft, their originality, their passion, and their grit. As I understand it, AI can synthesize, imitate, and mimic a writer’s work. The one thing it can’t do is create original thought and original material. Writers – with their unique perspectives and experiences, their individual and idiosyncratic use of language, and their ability to capture human behavior in all its grunge and glory – cannot be replaced.
Books, films, non-fiction, graphic novels, and poems are not merely material to be scraped, stolen, and exploited. They’re not “a data set to be ingested by an AI program”; they hold our past, our future, and our quotidian lives, and they teach us what it is to be human. This is a writer’s work.
The message is clear: support the writers.
How the Internet Is Changing Your Brain
In a way, we're all living in the matrix: moving around within an illusion of freedom when really our lives are dictated by technology.
Some 40 years ago, cell phones were practically nonexistent and the computers that mattered filled entire rooms. Then the World Wide Web was born.
15 years ago, the iPhone was just a seed of a dream in Steve Jobs' mind. But today, if you're reading this, you have access to countless screens and endless amounts of information; and you probably have a phone in your pocket that you can't be separated from without experiencing a cold rush of panic. Like it or not, you live in the digital age.
Everything is happening so fast these days; it's hard to find the time to seriously question how technology has altered the fabric of our realities. But here are four major ways the Internet has made our minds different from how they were before—so much so that we can never go back.
1. We never have to wonder about anything
Once upon a time, if you were sitting at dinner and a question came up about, say, climate change or the effects of a certain drug, you would have to either find someone who knew the answer or wait until a library opened. Then you'd have to go there and parse through the Dewey Decimal System until you found a volume that might be able to provide the answer.
Today, we all have any piece of information, no matter how small or obscure, quite literally at our fingertips. So we should be smarter than ever, right? But all this instantly accessible information comes at a price. One study found that millennials have even worse memories than seniors; and a recent Columbia University study revealed that if people believe they'll be able to look something up later, they're less likely to remember it.
In his book The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr argues that technology is making us stupider, less likely to think critically and retain the information we need. Part of this is because every time we go online, we are confronted with billions of sources vying for our attention, making it difficult to deploy the kind of focused concentration needed to synthesize and reflect on information.
Also, now that we have endless information at our fingertips, many have proposed that we are less curious than ever and less inclined to come up with original ideas. But curiosity is hard to measure, and while the Internet offers more ready-made answers than ever, it also lets more people create than ever before. New technologies crop up every day, suggesting that although the Internet might be making some of us stupider, it's also a fertile breeding ground for incredible, world-changing inventions and wildly viral content.
2. We're more interconnected—and lonelier than ever
Once upon a time, you had to call someone up to speak to them, but now you can see what any of your friends are doing at any time. Instagram and Snapchat stories make it possible to share intimate images of our lives on a wide scale with huge audiences at any time; and online algorithms make it so that whatever you post will never really be gone from the Internet, even if you delete it. We can see the daily coffee choices and midnight tearstained selfies of our favorite stars; we can hit up old friends from across the globe with a single Facebook search.
Humans have always been hard-wired for connection, desperately looking for kinship and community, and so it makes sense that the Internet has become so addictive. Every ping, alert, and notification provokes the same kind of dopamine rush that comes from an expression of love and friendship. On the other hand, cyberbullying and persistently comparing oneself to others in the virtual sphere can both have very adverse effects in the real world.
Some studies have proposed that social media increases levels of loneliness. One found that heavy Facebook, Snapchat, and Instagram use can contribute to depression in young adults. Excessive time on Facebook has also been found to be associated with poor physical health and life satisfaction. On the other hand, social media has presented an opportunity for isolated adults and senior citizens to reach out and connect; and online fan and lifestyle communities provide oases for people all over the world.
For better or for worse, the Internet has changed the way we connect. It's also changed the way we love. Dating apps produce some 26 million matches every day, and roughly 13% of people who meet on them go on to marry. And phones allow us to communicate with anyone at any moment of the day, creating whole new rules and expectations for relationships and making them altogether more interactive and involved than they once were. Plus, adult entertainment is fundamentally changing the way we have sex, with many studies suggesting that it's lowering sex drives and creating unrealistic expectations across the board.
It's the same for work: a Fortune study found that the average white-collar worker spends three hours per day checking emails. This comes part and parcel with the gig economy, that staple of Millennial culture built on perpetual interconnectedness and 24/7 "hustle"—a phenomenon that often leads to burnout.
3. We can have more than one reality—or can hide inside our own worlds more easily than ever
The Internet has made it easier than ever to craft false personas and embody illusory identities. We can use Photoshop to alter our appearances; we can leverage small talents into viral fame and huge monetary gains; and we can escape our world entirely in exchange for online communities and ever-growing virtual and augmented reality options.
The Internet is also altering our perceptions of reality. Although people once thought that interconnected online communities would facilitate the sharing of diverse viewpoints, it has turned out that social media allows us to access echo chambers even more isolated and partisan than what we'd see in our real lives.
In short, we're all at risk of being catfished.
4. Many of us are completely addicted
When was the last time you went a day without checking your phone? A week? And do you think that, if you needed to, you could quit? Most likely, the answer is no, so you'd better believe it: you're addicted to technology. But you're not alone. A 2017 study estimated that 210 million people worldwide may be addicted to the Internet and social media.
There are five primary types of Internet addiction: cybersexual addiction, net compulsions (online shopping), cyber relationships (online dating), gaming, and information seeking (surfing). In recent years, Internet addiction rehab has grown in popularity. The majority of people with legitimate Internet addiction problems are men in their teens to late thirties, but it's likely that we all suffer from this to some extent.
Although the Internet is changing everything about our lives, there is no clear consensus on whether these changes are for the worse or the better. But the changes will only grow more extreme over the years. Moore's Law observes that the number of transistors on a chip doubles roughly every two years, and it has become shorthand for the expectation that computing power will keep growing exponentially, meaning technology will continue to advance at a dizzying rate. If the past twenty years have given us iPhones, what will the next twenty bring? The next hundred, if we make it that far without global warming ending everything?
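To get a feel for what that kind of doubling implies, here is a minimal back-of-the-envelope sketch in Python; the starting transistor count and the two-year doubling period are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope sketch of exponential doubling (Moore's Law-style growth).
# The starting count and the two-year doubling period are assumptions for
# illustration only, not measured data.

START_TRANSISTORS = 10_000_000_000   # ~10 billion, a ballpark for a modern chip
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(years_from_now: float) -> float:
    """Project a transistor count assuming one doubling every DOUBLING_PERIOD_YEARS."""
    return START_TRANSISTORS * 2 ** (years_from_now / DOUBLING_PERIOD_YEARS)

for years in (20, 100):
    growth = 2 ** (years / DOUBLING_PERIOD_YEARS)
    print(f"In {years} years: ~{projected_transistors(years):.2e} transistors ({growth:,.0f}x today)")
```

Even at that slower, two-year cadence, twenty years of doubling works out to roughly a thousandfold increase.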
Only time will tell. We won't be the same—but then again, we were never meant to remain stagnant as a species. Change and chaos are the laws of the human race, and we've always been obsessed with progress.
Some theorists believe that technological progress will only end when we create a machine intelligence greater than our own, in a revelatory event called the singularity. If this happens, the AI could decide to eliminate us. That's another story. Until then, the sky is the limit for innovators and consumers everywhere.
Eden Arielle Gordon is a writer and musician from New York City. Follow her on Twitter @edenarielmusic.
Researchers Have Created an AI Too Dangerous to Release. What Will Happen When It Gets Out?
The GPT-2 software can generate fake news articles on its own. Its creators worry it could do real damage in the wrong hands. But it could also present a chance to intervene.
Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.
The software, called GPT-2, can generate cohesive, coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.
Fears like this led the Elon Musk-founded company OpenAI to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," they announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
In addition to writing a cohesive fictional story based on Lord of the Rings, the software produced a convincing news-style piece about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."
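The "much smaller model" OpenAI did release can be run by anyone today. As a rough sketch of what that looks like, the following assumes the Hugging Face transformers library (my assumption, not something the article mentions), which hosts those released GPT-2 weights:

```python
# Minimal sketch: sampling a continuation from the small, publicly released GPT-2.
# Assumes: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the released 124M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; do_sample=True gives varied, non-deterministic output.
output_ids = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The small model's output is noticeably less coherent than the withheld full model's, which is exactly the gap OpenAI's staged release was meant to preserve.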
This journalistic aptitude sparked widespread fears that AI technologies as sophisticated as the GPT-2 could influence upcoming elections, potentially generating unfathomable amounts of partisan content in a single instant. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."
Elon Musk left OpenAI's board in 2018, but his legacy of fear and paranoia regarding AI and its potential evils lives on. The specter of his caution was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," said Jack Clark, policy director of OpenAI, when asked about the decision to keep the new software under wraps.
In a world already plagued by fake news, cat-fishing, and other forms of illusion made possible by new technology, AI seems like a natural next step in the dizzying sequence of illusion and corruption that has rapidly turned the online world from a repository of cat videos (the good old days) to today's vortex of ceaselessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance against AI's unstoppable growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."
But until AI achieves the singularity—the point at which it matches and then surpasses human intelligence—it remains subject to the whims of whoever is controlling it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.
When it comes down to it, AI is, for now, a weapon in human hands.
When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid machines however they wish. Instances of robot beheadings and other violent behavior toward AI hint at a darker trend that could emerge should AI become a free-for-all: a humanoid object that can be treated in any way on the basis of its presumed inhumanity.
Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we all become more dependent on technology, the line dividing the human from the robotic grows blurrier. As a manmade invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, decrying the dearth of ethics in Silicon Valley and in the tech sphere as a whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown that this is untrue, including instances where law enforcement software displayed racist bias against Black people (based on data collected by humans).
AI can be just as prejudiced and closed-minded as a human, if not more so, especially in its early stages, when it is not sophisticated enough to think critically. An AI may not feel anything in and of itself, but—much like we learn how to process the world from our parents—it can learn how to process and understand emotions from the people who create it and from the media it absorbs.
A completely objective, totally nonhuman AI is kind of like the temperature absolute zero; it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that can inspire it to. It can also learn morality if its teachers choose to imbue it with the ability to tell right from wrong.
The quandary facing AI's creators may not be so different from the struggle parents face when deciding whether to let their children watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine the extent to which they can trust the public with their work. They also have to determine which aspects of humanity they want to expose their inventions to.
OpenAI may have kept their kid safe inside the house a little longer by withholding GPT-2, but that kid is growing—and when it goes out into the world, it could change everything. For better or worse, at some point, super-intelligent AI is going to wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape who it will become by the time it arrives.
Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.
AI and the Hiring Process
Are games and algorithms the future of the interview process?
Even for qualified candidates, the interview process can be a nerve-wracking experience, one that forces the interviewee to answer a series of exacting questions that have no real relevance to her ability to perform the job. From the employer side, things aren't much easier. HR reps and middle managers alike often find themselves with employees who look good on paper and talk a big game during their interview, but don't deliver once they've been hired. On top of this, there's nothing really stopping a potential employee from flat-out lying during the hiring process. If an interviewee gets caught in a lie, she won't get hired, but she didn't have a job to begin with, so she's no worse for wear. In order to mitigate these and the myriad other difficulties associated with the hiring process, employers have started using (in a somewhat ironic twist) artificial intelligence to aid with recruiting.
Outside of the difficulties discussed above, one of the primary motivators for companies' move toward automated recruiting is money. By some estimates, it can cost nearly a quarter of a million dollars to recruit, hire, and onboard a new employee, and when someone turns out to be a dud, the effects can reverberate throughout the entire company. That said, it's not as if corporations have HAL from 2001: A Space Odyssey hand-picking the optimal candidate, not yet at least. Different AI developers offer different things. For example, x.ai specializes in scheduling interviews, Filtered automatically generates coding challenges for aspiring programmers looking for work, and Clearfit has a tool that can rank potential candidates.
These programs, however useful, only free employers from the low-level clerical work of hiring. The bulk of the sorting and selecting of candidates still falls squarely on the shoulders of the hiring manager. Cue Pymetrics, a company built on the idea of reinventing the way we conduct interviews and hire new employees. Pymetrics' AI uses a series of algorithms and cognitive-science-based games to help pair employees and companies. The games, though, are what differentiate Pymetrics from the competition.
The idea is simple: when an employer gives an applicant a test or asks her a series of questions, the applicant answers in a way that comports with what she thinks the interviewer wants to hear. With an objective-based game where a clear goal is outlined, a candidate has a much harder time masking her methodology. The games Pymetrics develops reportedly measure 90 "cognitive, social and personality traits" and are used to gather data on a company's top performers. After enough data is collected, Pymetrics can then create the perfect composite employee. Every applicant is then measured against this composite, giving employers an objective look at who is best for the job.
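To make the idea concrete, here is a deliberately simplified sketch in Python of what "measuring every applicant against the composite" could look like; the trait names, the numbers, and the cosine-similarity scoring are illustrative assumptions on my part, not Pymetrics' actual method.

```python
# Illustrative sketch only: scoring an applicant against a "composite" of top
# performers via trait vectors and cosine similarity. The traits and data are
# invented; this is NOT the vendor's real algorithm.
import numpy as np

# Hypothetical subset of the ~90 measured traits.
TRAITS = ["risk_tolerance", "attention", "memory", "altruism", "planning"]

# Each row: one top performer's measured traits on a 0-1 scale (made-up data).
top_performers = np.array([
    [0.80, 0.70, 0.60, 0.40, 0.90],
    [0.70, 0.80, 0.50, 0.50, 0.80],
    [0.90, 0.60, 0.70, 0.30, 0.85],
])

# The "composite employee" here is simply the average trait profile.
composite = top_performers.mean(axis=0)

def match_score(applicant: np.ndarray) -> float:
    """Cosine similarity between an applicant's profile and the composite."""
    return float(np.dot(applicant, composite) /
                 (np.linalg.norm(applicant) * np.linalg.norm(composite)))

applicant = np.array([0.75, 0.70, 0.65, 0.45, 0.80])
print(f"Match score vs. composite: {match_score(applicant):.3f}")
```

A real system would of course need far more than a single similarity score, not least safeguards against the demographic bias discussed below.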
The use of games is far from a passing trend, however, and is not unique to Pymetrics. A Deloitte report recently revealed that nearly 30% of all business leaders use games and simulations in their hiring process. Unfortunately for companies hoping AI and algorithmic programs are a cure-all for their hiring woes, the same report concluded that over 70% of these business leaders found cognitive games to be a "weak" indicator of employee success. Still, to throw another wrench into the works, there is a significant amount of evidence that algorithms do outperform humans when it comes to picking ideal candidates. In reality, though, humans and AI systems are simply better at different things. A person, with only so much time in the day, can't accurately or quickly read through thousands of resumes and cover letters. But while algorithms are good at narrowing down the field and screening out obvious poor fits, they aren't particularly well suited to sussing out passion or work ethic. There's also another, less obvious issue attached to AI hiring.
AI- and algorithm-based hiring is supposedly unbiased, leaving no room for pettiness, racism, or sexism in the selection process. That said, today's leaders in AI technology are far from working out all the kinks. A crime-predicting algorithm used in Florida was found to falsely label Black defendants as future criminals at nearly twice the rate of white defendants. It also isn't an unrealistic leap to suggest that an algorithm could see a demographic pattern, such as the best salesman at a particular firm happening to be male, and conclude that it should rank female applicants lower. Pymetrics, in particular, claims that its algorithms are rigorously designed to avoid this type of bias, but the issue calls into question not only AI's efficacy but its ethics as well. According to Sandra Wachter, a researcher at the Alan Turing Institute and the University of Oxford, "Algorithms force us to look into a mirror on society as it is," and relying too heavily on data can make it seem as though our cultural biases are actually inscrutable facts. This is what Arvind Narayanan, a professor of computer science at Princeton, calls the "accuracy fetish," a fetish that's all too prevalent in Silicon Valley, where AI is consistently touted as objective.
In a lot of ways, it's hard to argue against algorithmic hiring procedures. They save both time and money, and they have been proven to work in several cases. The danger is not that this technology will supplant HR reps; people will continue to be a part of the interview process, if only because liking the people you work with is one of the most important facets of productivity. Algorithms only become a problem when they're treated as infallible oracles, capable of answering questions inaccessible to the human mind, rather than as pieces of machinery. It's important to remember that the algorithm is a tool, an electric drill to the interview process's hand crank. AI isn't meant to replace human judgement but to narrow the gap between rote tasks and decisions that require that judgement. In this metaphor, people aren't the hand crank or the electric drill; we're the screw.
That said, it's human nature to appeal to authority, and the question at the heart of the Luddite's fear is whether we can demystify this technology enough to keep trusting our guts over an algorithm's calculations. Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic." We'll find out soon enough whether or not he was right.