A return is almost always out of the question. Plus, gift givers don’t often include a return receipt, and we all know we wouldn’t dare ask for one. I’d rather admit to a crime than confess I don’t like a gift: how insulting to the gifter’s sense of aesthetics.
And hey, I have limited drawer space. Who can keep these unwanted gifts for six months when there isn’t any space for them? I hate clutter, and unwanted gifts are just that.
This year, I am making an effort to swiftly remove any unwanted gifts from my house without hurting anyone’s feelings, and maybe even benefiting others in the process. As the old saying goes, one man’s trash is another man’s treasure. And thank goodness for that.
From The Guardian:
“According to research published this week by the consumer body, one in four people (24%) received an unwanted or unsuitable gift for the Christmas of 2021. Meanwhile, a separate study by the personal finance comparison site Finder said £1.2bn was wasted on unwanted Christmas gifts each year.”
Come to terms with the fact that you will never use that gift and follow these quick tips to offload those unwanted gifts:
Donate
The most obvious choice for those unwanted pairs of mud-green sweat socks and that same fluffy robe you get every year from your Aunt Judy is to donate them. Just round up everything you don’t want and Google the donation center closest to you.
This is also a fantastic excuse to purge your closet of that pile of stuff you’ve been meaning to get rid of. A few bags of giveaway clothes will get your spring cleaning out of the way early.
Sell Them
Resale websites are all the rage right now. If you got a pair of pants that don’t fit or a sweater that isn’t your style, resell them on a website dedicated to just that. Sites like Poshmark, Mercari, and Depop are known for selling those trendy pieces of clothing you barely used.
Thrifting has never been hotter. Hop on the trend while people are constantly perusing sites for the hottest deal. Then reward yourself for being so virtuous by dropping the cash on some fabulous things you’ll actually wear!
Re-Gift
If you got something that you think one of your friends or family can benefit from, why not give it to them? There’s no shame in revealing that it was a gift and you don’t want it anymore…as long as you aren’t re-gifting to the person who gave it to you!
Or, keep the gifts to re-gift at a later date. You never know when you’re going to need a last-minute gift. You’ll thank yourself later.
Attempt a Return
If your item still has a tag, you can make a valiant effort to return it to the store. If you can make your case, many stores won’t want to fight you on it. They may be forgiving and grant you store credit at the very least.
How the Internet Is Changing Your Brain
In a way, we're all living in the matrix: moving around within an illusion of freedom when really our lives are dictated by technology.
Forty-odd years ago, there was no such thing as a cell phone, and most computers took up entire rooms. Then the World Wide Web was born.
Fifteen years ago, the iPhone was just a seed of a dream in Steve Jobs' mind. But today, if you're reading this, you have access to countless screens and endless amounts of information, and you probably have a phone in your pocket that you can't be separated from without experiencing a cold rush of panic. Like it or not, you live in the digital age.
Everything is happening so fast these days; it's hard to find the time to seriously question how technology has altered the fabric of our realities. But here are four major ways the Internet has made our minds different from how they were before—so much so that we can never go back.
1. We never have to wonder about anything
Once upon a time, if you were sitting at dinner and a question came up about, say, climate change or the effects of a certain drug, you would have to either find someone who knew the answer or wait until a library opened. Then you'd have to go there and comb through the Dewey Decimal System until you found a volume that might provide the answer.
Today, we all have any piece of information, no matter how small or obscure, quite literally at our fingertips. So we should be smarter than ever, right? But all this instantly accessible information comes at a price. One study found that millennials have even worse memories than seniors; and a Columbia University study revealed that if people believe they'll be able to look something up later, they're less likely to remember it, a phenomenon dubbed the "Google effect."
In his book The Shallows: What the Internet Is Doing to Our Brains, Nicholas Carr argues that technology is making us stupider, less likely to think critically and retain the information we need. Part of this is because every time we go online, we are confronted with billions of sources vying for our attention, making it difficult to deploy the kind of focused concentration needed to synthesize and reflect on information.
Also, now that we have endless information at our fingertips, many people have proposed that we may be less curious than ever, less inclined to come up with original ideas. But curiosity is hard to pin down, and though the Internet offers more resources than ever, it also means that more people are creating content than ever before. Innovative new technologies crop up every day, suggesting that although the Internet might be making some of us stupider, it's also a fertile breeding ground for incredible, world-changing inventions and unprecedentedly viral content.
2. We're more interconnected—and lonelier than ever
Once upon a time, you had to call someone up to speak to them; now you can see what any of your friends are doing at any time. Instagram and Snapchat stories make it possible to share intimate images of our lives with huge audiences at any moment, and because posts are endlessly cached and re-shared, whatever you put on the Internet never really disappears, even if you delete it. We can see the daily coffee choices and midnight tearstained selfies of our favorite stars; we can hit up old friends from across the globe with a single Facebook search.
Humans have always been hard-wired for connection, desperately looking for kinship and community, and so it makes sense that the Internet has become so addictive. Every ping, alert, and notification provokes the same kind of dopamine rush that comes from an expression of love and friendship. On the other hand, cyberbullying and persistently comparing oneself to others in the virtual sphere can both have very adverse effects in the real world.
Some studies have proposed that social media increases levels of loneliness. One found that heavy Facebook, Snapchat, and Instagram use can contribute to depression in young adults. Excessive time on Facebook has also been found to be associated with poor physical health and life satisfaction. On the other hand, social media has presented an opportunity for isolated adults and senior citizens to reach out and connect; and online fan and lifestyle communities provide oases for people all over the world.
For better or for worse, the Internet has changed the way we connect. It's also changed the way we love. 26 million matches are made every day on dating apps, and roughly 13% of people who met on dating apps married. And phones allow us to communicate with anyone at any moment of the day, creating whole new rules and expectations for relationships, making them altogether more interactive and involved than they once were. Plus, adult entertainment is fundamentally changing the way we have sex, with many studies revealing that it's lowering sex drives and creating unrealistic expectations across the board.
It's the same for work: a Fortune study found that the average white-collar worker spends three hours per day checking emails. This comes part and parcel with the gig economy, that staple of Millennial culture built on perpetual interconnectedness and 24/7 "hustle"—a phenomenon that often leads to burnout.
3. We can have more than one reality—or can hide inside our own worlds more easily than ever
The Internet has made it easier than ever to craft false personas and embody illusory identities. We can use Photoshop to alter our appearances; we can leverage small talents into viral fame and huge monetary gains; and we can escape our world entirely in exchange for online communities and ever-growing virtual and augmented reality options.
The Internet is also altering our perceptions of reality. Although people once thought that interconnected online communities would facilitate the sharing of diverse viewpoints, it has turned out that social media allows us to retreat into echo chambers even more isolated and partisan than anything we'd encounter in our real lives.
In short, we're all at risk of being catfished.
4. Many of us are completely addicted
When was the last time you went a day without checking your phone? A week? And do you think that if you needed to, you could quit? Most likely, the answer is no, so you'd better believe it: you're addicted to technology. But you're not alone. A 2017 study estimated that 210 million people worldwide may be addicted to the Internet.
There are five primary types of Internet addictions: cybersexual addiction, net compulsions (online shopping), cyber relationships (online dating), gaming, and information seeking (surfing). In recent years, internet addiction rehab has grown in popularity. The majority of people with legitimate internet addiction problems are men in their teens to late thirties, but it's likely that we all suffer from this to some extent.
Although the Internet is changing everything about our lives, ultimately, there is no clear consensus on whether these changes are for the worse or the better. But the changes will be growing more extreme over the years. Moore's Law observes that processing power has doubled roughly every two years—meaning that, if the trend holds, technology will continue to advance at an unimaginable rate. If the past twenty years have given us iPhones, what will the next twenty bring? The next hundred, if we make it that far without global warming ending everything?
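To make that concrete, here's some back-of-the-envelope arithmetic in Python, a sketch that simply assumes the historical two-year doubling continues indefinitely (it may well not):

```python
# Illustrative only: project capability growth if Moore's-Law-style
# doubling (historically ~every two years) were to continue unchecked.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Projected multiple of today's processing power after `years`."""
    return 2 ** (years / doubling_period)

print(f"{growth_factor(20):,.0f}x in 20 years")    # ~1,024x
print(f"{growth_factor(100):.2e}x in 100 years")   # ~1.13e+15x
```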
Only time will tell. We won't be the same—but then again, we were never meant to remain stagnant. Change and chaos are the laws of the human race, and as a species, we've always been obsessed with progress.
Some theorists believe that technological progress will only end when we create an operating system more intelligent than we are, in a revelatory event called the singularity. If this happens, the AI could decide to eliminate us. That's another story—but until then, the sky is the limit for innovators and consumers everywhere.
Eden Arielle Gordon is a writer and musician from New York City. Follow her on Twitter @edenarielmusic.
Researchers Have Created an AI Too Dangerous to Release. What Will Happen When It Gets Out?
The GPT-2 software can generate fake news articles on its own. Its creators fear its potential for malicious use. But its existence could also present a chance to intervene.
Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.
The software, called GPT-2, can generate cohesive, coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.
Fears like this led the Elon Musk-founded company OpenAI to curtail the software's release. "Due to our concerns about malicious applications of the technology, we are not releasing the trained model," they announced in a blog post. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
In addition to writing a cohesive fictional story based on Lord of the Rings, the software wrote a plausible-sounding news report about the discovery of unicorns. "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains," the software wrote. "Even more surprising to the researchers was the fact that the unicorns spoke perfect English. The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science."
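The much smaller model OpenAI did release can be tried firsthand. Here's a minimal sketch assuming you access the public small checkpoint through Hugging Face's transformers library (a later development, not something described in the article); the "gpt2" model name and the prompt are illustrative:

```python
# Minimal text generation with the publicly released small GPT-2.
# Requires: pip install transformers torch (downloads weights on first run).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "In a shocking finding, scientist discovered a herd of unicorns"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```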
This journalistic aptitude sparked widespread fears that AI technologies as sophisticated as the GPT-2 could influence upcoming elections, potentially generating unfathomable amounts of partisan content in a single instant. "The idea here is you can use some of these tools in order to skew reality in your favor," said University of Washington professor Ryan Calo. "And I think that's what OpenAI worries about."
Elon Musk quit OpenAI in 2018, but his legacy of fear and paranoia regarding AI and its potential evils lives on. The specter of his caution was likely instrumental in keeping GPT-2 out of the public sphere. "It's quite uncanny how it behaves," said Jack Clark, policy director of OpenAI, when asked about the decision to keep the new software under lock and key.
In a world already plagued by fake news, catfishing, and other forms of illusion made possible by new technology, AI seems like a natural next step in the dizzying sequence of illusion and corruption that has rapidly turned the online world from a repository of cat videos (the good old days) into today's vortex of ceaselessly reproduced lies and corrupted content. Thinkers like Musk have long called for resistance against AI's unstoppable growth. In 2014, Musk called AI the single largest "existential threat" to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could "spell the end of the human race."
But until AI achieves the singularity—the point at which it matches and then surpasses human intelligence—it remains subject to the whims of whoever is controlling it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.
When it comes down to it, for now, AI is a weapon.
When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid creations however they wish. Instances of robot beheadings and other violence towards AI hint at a darker trend that could emerge should AI become a free-for-all, a humanoid object that can be treated in any way on the basis of its presumed inhumanity.
Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we all become more dependent on technology, the line dividing the human from the robotic grows ever blurrier. As a man-made invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, decrying the dearth of ethics in Silicon Valley and in the tech sphere as a whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown that this is untrue, including instances where law enforcement software systems displayed racist bias against black people (based on data collected by humans).
AI can be just as prejudiced and closed-minded as a human, if not more so, especially in its early stages, when it is not sophisticated enough to think critically. An AI may not feel in and of itself, but—much like we learn how to process the world from our parents—it can learn how to process and understand emotions from the people who create it, and from the media it absorbs.
A completely objective, totally nonhuman AI is kind of like the temperature absolute zero; it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that can inspire it to. It can also learn morality if its teachers choose to imbue it with the ability to tell right from wrong.
The quandary AI's creators face may not be so different from the struggle parents face when deciding whether to allow their children to watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine the extent to which they can trust the public with their work. They also have to determine which aspects of humanity they want to expose their inventions to.
OpenAI may have kept their kid safe inside the house a little longer by freezing the GPT-2, but that kid is growing—and when it goes out into the world, it could change everything. For better or worse, at some point, super-intelligent AI is going to wind up in the public's hands. Now, during its tender, formative stages, there is still a chance to shape it into whom it's going to be when it arrives.
Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.
AI and the Hiring Process
Are games and algorithms the future of the interview process?
Even for qualified candidates, the interview process can be a nerve-wracking experience, one that forces the interviewee to answer a series of exacting questions with no real relevance to her ability to perform the job. From the employer side, things aren't much easier. HR reps and middle managers alike often find themselves with employees who look good on paper and talk a big game during their interview, but don't deliver once they've been hired. On top of this, there's nothing really stopping a potential employee from flat-out lying during the hiring process. If an interviewee gets caught in a lie, she won't get hired, but she didn't have a job to begin with, so she's no worse for wear. To mitigate these and the myriad other difficulties associated with the hiring process, employers have started using (in a somewhat ironic twist) artificial intelligence to aid with recruiting.
Outside of the difficulties discussed above, one of the primary motivators for companies' move toward automated recruiting is money. By some estimates, it can cost nearly a quarter of a million dollars to recruit, hire, and onboard a new employee, and when someone turns out to be a dud, the effects can reverberate throughout the entire company. That said, it's not as if corporations have HAL from 2001: A Space Odyssey hand-picking the optimal candidate, not yet at least. Different AI developers offer different things. For example, x.ai specializes in scheduling interviews, Filtered automatically generates coding challenges for aspiring programmers looking for work, and Clearfit has a tool that can rank potential candidates.
These programs, however useful, only free employers from the low-level clerical work of hiring. The bulk of the sorting and selecting of candidates still falls squarely on the shoulders of the hiring manager. Cue Pymetrics, a company built on the idea of reinventing the way we conduct interviews and hire new employees. Pymetrics' AI uses a series of algorithms and cognitive science-based games to help pair employees with companies. It's the games, though, that differentiate Pymetrics from the competition.
The idea is simple: when an employer gives an applicant a test or asks her a series of questions, the applicant answers in a way that comports with what she thinks the interviewer wants to hear. With an objective-based game where a clear goal is outlined, a candidate has a much harder time masking her methodology. The games Pymetrics develops reportedly measure 90 "cognitive, social and personality traits" and are used to gather data on a company's top performers. After enough data is collected, Pymetrics can then create the perfect composite employee. Every applicant is then measured against this composite, giving employers an objective look at who is best for the job.
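Pymetrics' actual scoring is proprietary, but the matching step described above can be pictured as simple vector comparison. Here's a minimal, hypothetical sketch in Python (NumPy), assuming each person reduces to a 90-dimensional trait vector as the article reports; the cosine-similarity scoring is my illustrative assumption, not the company's disclosed method:

```python
import numpy as np

N_TRAITS = 90  # the article's reported count of measured traits

def composite_profile(top_performers: np.ndarray) -> np.ndarray:
    """Average the trait vectors of a company's top performers
    (rows = people) into one 'composite employee' profile."""
    return top_performers.mean(axis=0)

def match_score(candidate: np.ndarray, composite: np.ndarray) -> float:
    """Cosine similarity between a candidate and the composite:
    1.0 is a perfect directional match, 0.0 is no resemblance."""
    return float(
        candidate @ composite
        / (np.linalg.norm(candidate) * np.linalg.norm(composite))
    )

# Toy usage with random stand-in data
rng = np.random.default_rng(0)
top_performers = rng.random((25, N_TRAITS))   # 25 top performers
applicant = rng.random(N_TRAITS)
print(match_score(applicant, composite_profile(top_performers)))
```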
The use of games is far from a passing trend, however, and is not unique to Pymetrics. A Deloitte report recently revealed that nearly 30% of all business leaders use games and simulations in their hiring process. Unfortunately for companies hoping AI and algorithmic programs are the cure-all for their hiring woes, the same report concluded that over 70% of these business leaders found cognitive games to be a "weak" indicator of employee success. Still, to throw another wrench into the works, there is a significant amount of evidence that algorithms do outperform humans when it comes to identifying ideal candidates. In reality, though, humans and AI systems are simply better at different things. A person, with only so much time in the day, can't accurately or quickly read through thousands of resumes and cover letters. But while algorithms are good at narrowing down selections and weeding out clearly wrong fits, they aren't particularly well suited to sussing out passion or work ethic. There's also another, less obvious issue attached to AI hiring.
AI and algorithmically based hiring are supposedly unbiased, leaving no room for pettiness, racism, or sexism in the selection process. That said, today's leaders in AI technology are far from working out all the kinks. A crime-predicting algorithm in Florida recently labeled black people as potential criminals twice as often as it did white people. It also isn't an unrealistic leap to suggest that an algorithm could see a demographic pattern, such as the best salesman at a particular firm happening to be male, and conclude that it should rank female applicants lower. Pymetrics, in particular, claims that its algorithms are rigorously designed to avoid this type of bias, but the issue calls into question not only AI's efficacy but its ethics as well. According to Sandra Wachter, a researcher at the Alan Turing Institute and the University of Oxford, "Algorithms force us to look into a mirror on society as it is," and relying too heavily on data can make it seem as though our cultural biases are actually inscrutable facts. This is what Arvind Narayanan, a professor of computer science at Princeton, calls the "accuracy fetish," a fetish all too prevalent in Silicon Valley, where AI is consistently touted as objective.
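One common audit for exactly this failure mode is the "four-fifths rule" from US employment guidelines: compare each group's selection rate to the best-off group's, and flag the tool if any ratio falls below 0.8. The sketch below is hypothetical and not claimed to be what Pymetrics or any vendor actually runs; the toy numbers are invented:

```python
def adverse_impact(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """selections maps group -> (hired, applied). Returns each group's
    selection rate divided by the highest group's rate; values below
    0.8 fail the four-fifths rule of thumb and warrant a closer audit."""
    rates = {g: hired / applied for g, (hired, applied) in selections.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Toy numbers: if the algorithm advances 50% of group A but only 30%
# of group B, B's ratio is 0.6 -- a red flag under the four-fifths rule.
print(adverse_impact({"A": (50, 100), "B": (30, 100)}))
```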
In a lot of ways, it's hard to argue against algorithmic hiring procedures. They save both time and money, and they have been proven to work in several cases. The danger is not in this technology supplanting HR reps; people will continue to be a part of the interview process, if only because liking the people you work with is one of the most important facets of productivity. Algorithms only become a problem when they're treated as infallible oracles, capable of answering questions inaccessible to the human mind, rather than as pieces of machinery. It's important to remember that the algorithm is a tool, an electric drill to the interview process's hand crank. AI isn't meant to replace human judgement, but to narrow the gap between rote tasks and decisions that require said judgement. In this metaphor, people aren't the hand crank or the electric drill; we're the screw.
That said, it's human nature to appeal to authority, and the question at the heart of the Luddite's fear is whether we can demystify this technology enough to keep trusting our guts over an algorithm's calculations. Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic." We'll find out soon enough whether he was right.