When Selena Gomez launched Rare Beauty back in 2020, the message was simple: challenge the notion that everyone must be perfect, and shine a light on mental health issues.
While this may sound like every budding makeup brand's dream mission, brands like Fenty Beauty had already shared similar, groundbreaking mission statements: bolster inclusivity in the makeup industry and push all brands to do the same in the process.
Inspired by her 2020 album, Rare, Rare Beauty began with the basics: 48 foundation shades, lip balms and matte lip creams, eyebrow definers, and the now-iconic liquid blush. Four years later, it's hard to imagine a more viral, innovative celebrity makeup brand that remains in stride with Fenty.
Quickly, the Rare Beauty Soft Pinch Liquid Blush became TikTok's go-to staple product, and no one can deny it: there is no blush on the market as pigmented, easily blendable, and long-lasting as this one. Selena Gomez has proven herself a bona fide content creator with her charismatic social media posts for fun Rare Beauty launches like an under-eye brightener, an SPF-laden tinted moisturizer, and lip combos.
Not only is Rare Beauty inclusive in shade range, but the rounded, spherical tops of its products are disability-friendly, designed to be easier to grip and open.
As of 2024, Rare Beauty is a $2 billion company. But what sets the company apart is its attention to detail and true dedication to bettering the world. The same year Rare Beauty was founded, the Rare Impact Fund was created alongside it.
What Is The Rare Impact Fund?
In a statement on the Rare Impact Fund's website, Gomez writes,
“The Rare Impact Fund is committed to expanding access to mental health services and education for young people everywhere. We work with a strong network of supporters and experts to bring mental health resources into educational settings to reach young people.
Because no one - regardless of age, race, gender, sexual orientation, or background - should struggle alone."
From its start, the Rare Impact Fund committed to raising $100 million by 2030. Along with corporate sponsorships and individual donations, 1% of proceeds from all Rare Beauty sales goes toward the charity as well. By 2021, it had donated over $1.2 million in grants to eight mental health organizations, including the Yale Center for Emotional Intelligence.
In 2021, the Rare Impact Fund launched a GoFundMe for their new Mental Health 101 initiative. According to the GoFundMe,
“Mental Health 101 advocates for more mental health in education, empowers our community, and encourages financial support for more mental health services in educational settings through the Rare Impact Fund.”
The fund promised to match up to $200,000 in donations; to date, the GoFundMe has raised over $500,000, with donations as recent as six months ago.
How The Rare Impact Fund Works
By leveraging both Selena Gomez's millions of social media followers and the four million people who follow Rare Beauty on Instagram, the Rare Impact Fund quickly gained visibility. Suddenly, fans of the brand and of Gomez alike can help make a difference by donating even a few dollars in honor of their favorite actress-singer extraordinaire.
As of 2023, the Rare Impact Fund had supported grantees like UCLA Friends of Semel Institute, Batyr, La Familia, Mindful Life Project, Black Teacher Project, and Trans Lifeline. According to its website, the fund has raised $6 million in contributions and distributed $3 million in grant support so far.
Together, Rare Beauty and the Rare Impact Fund are blazing a trail for all brands: you can make a change while still distributing high-quality products — and it pays off.
Contrary to popular belief, there is no hate speech exception to the First Amendment.
The social networking site Gab has been taken offline since it was confirmed that the Pittsburgh synagogue gunman used it to post anti-Semitic hate speech and to threaten Jews. The site is popular with the far right and describes itself as "an ad-free social network for creators who believe in free speech, individual liberty, and the free flow of information online. All are welcome." Gab was originally created by conservative businessman Andrew Torba in response to Twitter clamping down on hate speech in 2016.
Robert Bowers logged onto the platform shortly before killing 11 people at the Tree of Life synagogue on Saturday to post a final anti-Semitic message.
Consequently, the site has been abandoned by payment processing firms PayPal and Stripe, as well as hosting service Joyent and domain registrar GoDaddy. A statement on Gab's website Monday read that the platform would be "inaccessible for a period of time" as it switches to a new web host. It said the issue was being worked on "around the clock." The statement went on to defend the website, saying, "We have been systematically no-platformed [and] smeared by the mainstream media for defending free expression and individual liberty for all people."
Regarding Bowers' use of the site, Torba wrote, "Because he was on Gab, law enforcement now have definitive evidence for a motive. They would not have had this evidence without Gab. We are proud to work with and support law enforcement in order to bring justice to this alleged terrorist."
But companies associated with Gab were not satisfied by the site's cooperation with law enforcement and continue to abandon the site. PayPal, the platform Gab used to manage donations from users, said in a statement, "When a site is explicitly allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action."
A tweet from Gab on Monday morning implied that the people behind the site believe themselves to be a victim of intentional defamation.
Set aside the questionable intent of the decidedly tone-deaf tweet: legally, Gab did not do anything wrong. Contrary to popular belief, there is no hate speech exception to the First Amendment. The Supreme Court reaffirmed this in 2017 in Matal v. Tam, deciding, "Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful...the proudest boast of our free speech jurisprudence is that we protect the freedom to express 'the thought that we hate.'" Despite this, many people are calling for the permanent removal of the site, though as Wired points out, "Momentary political rage can blind people into abandoning sacred values."
However, the internet inarguably contributes to the creation of extremists, as we have seen in the case of terrorists, rapists, school shooters, and now the synagogue shooter in Pittsburgh. Sites like Gab allow users to easily find other people who share their most extreme viewpoints, inevitably normalizing disturbing rhetoric the user might otherwise have suppressed or self-corrected in time. Sites like Gab thus become polarizing spaces that help sow the kinds of ideas that lead to violent acts. But if there's no legal action to be taken against a site like Gab without damaging free speech, what can be done?
Justice Anthony Kennedy said in his opinion following Matal v. Tam, "A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government's benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society."
While what exactly those safeguards are remains unclear, one can speculate that what Kennedy meant is exactly what Gab is calling unjust now. As previously mentioned, the site has been abandoned by all of the companies whose services it needed to remain online. And just as Gab has the right to allow freedom of expression on its site as it sees fit, these companies are also free to express themselves by refusing to work with websites that allow hateful rhetoric.
Indeed, the conversation surrounding the fate of Gab has revealed that freedom of speech online is not decided by the government, but by social media platforms, servers, and domain registrars, who are free to decide what kinds of opinions their companies want to be associated with. This also means that, on some level, what is seen as acceptable online is driven by consumer outrage and approval.
For example, after facing criticism for allowing users to post prejudiced content, larger social networking sites like Twitter and Facebook have been actively fighting against hateful rhetoric with varying degrees of success. In 2016, a code of conduct was established by the European Union in collaboration with Facebook, Twitter, YouTube, and Microsoft. The code is aimed at fighting racism and xenophobia and encourages the social media companies to remove hate speech from their platforms.
So, instead of outraged Americans calling for the legal suppression of sites like Gab — an impossibility if the First Amendment is to remain intact — the real power of the individual to fight hate speech is in one's ability to support or boycott companies based on how they handle free expression.
Brooke Ivey Johnson is a Brooklyn-based writer, playwright, and human woman. To read more of her work, visit her blog or follow her Twitter @BrookeIJohnson.
700,000 Muslims were forced to flee to neighboring Bangladesh in 2017.
On Monday, Facebook said it removed 13 pages and 10 accounts controlled by the Myanmar military in connection with the Rohingya refugee crisis.
The accounts were masquerading as independent entertainment, beauty, and information pages, such as Burmese popstars, wounded war heroes, and "Young Female Teachers." Fake postings reached 1.35 million followers, spreading anti-Muslim messages to social media users across the Buddhist-majority country.
Facebook's move comes a year after 700,000 Rohingya, a Muslim minority group in Myanmar, were forced to flee to neighboring Bangladesh amid widely-documented acts of mob violence and rape perpetrated by Myanmar soldiers and Buddhist mobs. The United Nations Human Rights Council denounced the crisis as "a textbook case of ethnic cleansing and possibly even genocide."
Rohingya children rummaging through the ruins of a village market that was set on fire. (Reuters)
Last month, the social media giant announced a similar purge, removing Facebook and Instagram accounts followed by a whopping 12 million users. Senior General Min Aung Hlaing, commander-in-chief of the Myanmar armed forces, was banned from the platform, as was the military's Myawady television network.
Over the last few years, Facebook has been in the hot seat for its tendency to spread misinformation. In the 2016 U.S. presidential election, inauthentic Facebook accounts run by Russian hackers created 80,000 posts that reached 126 million Americans through liking, sharing, and following. The problem persisted in the 2018 midterm elections, ahead of which the company removed 559 pages that broke its policies against spreading spam and coordinated influence efforts. Recent campaigns originating in Iran and Russia target not only the U.S., but also Latin America, the U.K., and the Middle East.
The situation in Myanmar is particularly troubling—it's not an effort by foreign powers to stoke hate and prejudice in a rival, but rather an authoritarian government using social media to control its own people. According to the New York Times, the military Facebook operation began several years ago with as many as 700 people working on the project.
Screen shots from the account of the Myanmar Senior General Min Aung Hlaing, whose pages were removed in August.
Claiming to show evidence of conflict in Myanmar's Rakhine State in the 1940s, the images are in fact from Bangladesh's war for independence from Pakistan in 1971.
Fake pages of pop stars and national heroes would be used to distribute shocking photos, false stories, and provocative posts aimed at the country's Muslim population. They often posted photos of corpses from made-up massacres committed by the Rohingya, or spread rumors about people who were potential threats to the government, such as Nobel laureate Daw Aung San Suu Kyi, to hurt their credibility. On the anniversary of September 11, 2001, fake news sites and celebrity fan pages sent warnings through Facebook Messenger to both Muslim and Buddhist groups that an attack from the other side was impending.
Facebook admitted to being "too slow to prevent misinformation and hate" on its sites. To prevent misuse in the future, it plans to invest heavily in artificial intelligence to proactively flag abusive posts, make reporting tools easier and more intuitive for users, and continue education campaigns in Myanmar to introduce tips on recognizing false news.
The company described the work it is doing to identify and remove the misleading network of accounts in the country as "some of the most important work being done [here]."
Sites like Facebook will have more and more influence over our elections in the future.
America's favorite uncorroborated news story of the moment is that the Russian government masterminded Trump's rise to power. It's easy to understand why. Introspection after a loss is difficult, and rather than face themselves, the DNC decided to have a seance, evoking a Cold War ghost to explain their defeat. It's somewhat comforting to assume an international conspiracy was behind Hillary Clinton's failure in the 2016 election. It absolves the DNC of any responsibility to change their conduct or adjust their political strategy. That said, there is no hard evidence of collusion, but rather a string of awkward encounters by Trump's largely inexperienced, and frankly stupid, staff. The meat of Russia's "interference" came in the form of social media bots, fake accounts that would automatically repost sensationalist headlines to drum up support for Trump. These accounts are pretty easy to spot, however, as they don't even come close to passing a Turing test.
Blaming Russia is too easy
Still, the creation of Russia's bot army had to be predicated on some form of information, and many have accused Putin's government of mining users' Facebook data in an attempt to gain a psychological understanding of the average American voter. This is where Aleksandr Kogan comes into play. Kogan sold the data of some 87 million Facebook users (collected via a quiz app) to Cambridge Analytica, a political consulting firm hired by the Trump campaign. Cambridge Analytica's goal was to create psychographic voting profiles. While there's no definitive connection between Cambridge Analytica and Russia, the precedent set by CA and its illegal exploitation of Facebook is a frightening one. If a private company is collecting data on citizens, it's a pretty safe bet that governments around the world are doing the same.

While the Democratic Party's Russophobia is definitely a reaction to losing in 2016 more than anything else, it accidentally shed light on an important issue: our data isn't safe, and with recent improvements in AI and voice recognition software, we'll soon have the technology not only to create comprehensive individual psych profiles, but to tailor campaigns to individual voters.

Obviously, companies like Google and Facebook have large stores of internal data, and they've certainly been amenable to selling it, but academic researchers (like Kogan) also have large data caches. Behavioral psychologists use Facebook in studies all the time, and the academic world isn't particularly well-known for its cybersecurity. Even in the event that these databases aren't hacked, there's nothing to prevent a researcher from selling their findings after a study is complete. The quick fix is to let Facebook block third parties from collecting data on its users, and for its part, Facebook has done just that. It has begun blocking apps from collecting information and has also limited the number of researchers allowed to look at data on the site.
Only academics researching political elections through the lens of social media are permitted to apply for access to Facebook's database.
At a glance, these robust safety measures are a breath of fresh air. It isn't often that a tech company is so committed to its customers' privacy. That said, when things look too good to be true, they usually are. If Facebook continues its path to prohibition, "only Facebook will really know very much about how Facebook actually operates and how people act on Facebook," warns Dr. Rasmus Kleis Nielsen of Oxford University. Sure, measures like these could protect data from outsiders, but they would also give a private company sole proprietorship over the most comprehensive database of human behaviors and tendencies ever created. Facebook would have even more sway over our local and national elections than it already does, and would gain a monopoly over 2 billion people's personal data. Essentially, Facebook could name its price.

Because of the way the Internet works, there's no way to effectively protect our Facebook data without severely compromising our freedom. And even if we were to let Zuckerberg shut everyone out of Facebook's data vaults, this wouldn't prevent other websites or services from collecting information on us. It doesn't make us any safer. Our sensitive information is freely available to anyone who knows how to access it.
As technology improves, it's going to become more and more difficult to tell what is and isn't fake news–whether or not that article you just read was an advertisement for Tide or some political campaign you weren't aware of. For better or worse, we've set out to map the entire spectrum of human behaviors. Eventually, marketing campaigns will be so advanced, so accurate in their mapping of our desires, we may forget that we ever had the capacity to think. Somewhere, the ghost of B.F. Skinner is smiling.
Even though some Millennials are almost forty, people are still bashing them.
Last year the New York Post ran an article about Millennials making up the largest portion of the American workforce, ignoring a glaringly obvious point: of course 22- to 37-year-olds are the largest portion of the labor market; they're adults. In an effort to make a distinctly un-newsworthy article newsworthy, the Post settled on an old trope: pick on the Millennials. For its part, this article wasn't as bad as most. The author refrained from using words like "entitled" and "coddled" and "irresponsible," but there's still a certain connotation attached to the term Millennial, particularly in the way it pertains to work ethic and maturity. Repudiating a stereotype often doesn't have the desired effect; in fact, it has a tendency of validating the stereotyper.* That said, my editor's asked me to dissect the maelstrom of insults and unfair generalizations that surround my generation, so here it goes.
In order to parse the general themes of Millennial bashing from the tsunami of bullshit that's been thrown our way, it's important to acknowledge how it all started. In many ways, a lot of the Millennial-centric ire feels natural. Baby Boomers hated Gen Xers. WWII Vets were critical of Boomers. There's always been something decidedly adversarial about the relationship between a young generation and their parents. This is fine. It's one of the many growing pains associated with being a young professional. The strange thing is how long this anti-Millennial sentiment has lasted. Complaints about young folks usually stop before those young folks are forty.
One theory about Millennial bashing's longevity is that it's a symptom of the economic anxiety created by the financial crisis of 2008. Parents had already been lambasting Millennials for being entitled and not wanting to sacrifice their twenties to career paths they weren't interested in and didn't respect. Boomers were more concerned with being pragmatic, while Millennials wanted to find meaning in their work. Naturally, this caused friction. Still, there was nothing out of the ordinary. At this point, Millennials were college and high school students.
We're just gathering around a single computer monitor to check out some sweet graphs. You know, millennial stuff.
Following the Great Recession, however, this friction was compounded, as Boomers and Gen Xers everywhere lost pensions and 401(k)s, and their supposedly 'safe' jobs went up in smoke. When slapped in the face by reality, Boomers realized that sacrificing the best years of their lives to jobs they hated yielded very few tangible results. They were understandably upset. There's plenty of pop psychology out there that'll tell you people hate being wrong, but when, by virtue of being wrong, their entire life is called into question, something interesting happens. Back in the 50s, a study was done on a doomsday cult in Chicago. The cult predicted that a massive flood would destroy the West Coast of the United States and that flying saucers would rescue the chosen believers before the cataclysm struck. Obviously, it never happened. Strangely, after the prophecy failed, rather than admitting they'd been duped, folks in the cult doubled down on their beliefs, assuming that their prayers had been answered by God and that he decided to spare the planet on their behalf.
Applying similar logic, Boomers, rather than admitting that the system they'd bought into wasn't really looking out for their best interest, doubled down, intensifying their rhetoric against lazy and entitled Millennials. Inasmuch as all invectives are projections of a speaker's insecurities, Boomers and Gen Xers are really saying one of two things when they blindly lash out. One: they made the wrong choices when they were young, and feel they missed The Road Not Taken. Two: they feel that they didn't work hard enough to inoculate themselves from the effects of our failing economy. The former is sad. The latter is terrifying.
Millennials don't care about dressing up for work.
Another way to look at this issue is via the lens of corporate America. As pointed out by Tucker Max**, the corporate formula is simple: sacrifice youth in exchange for status and financial security. The problem is, status is only worthwhile if people believe in the power structures it's attached to. Money certainly still commands respect, but middle managers aren't exactly rolling in it. With this in mind, it's easy to look at Boomers' Millennial fixation as an obsession with preserving the status quo (pun intended). In their world, being respected can feel like the end-all, be-all of adult life. If Millennials don't buy into the existing systems of power, then the prestige Boomers have strived for is meaningless.
There's always a disconnect between generations, but the way in which Millennials have been used as scapegoats for economic issues is beyond the pale. Many of us own homes and have families already. Some of us are prominent business owners. If 1996 is a strict cutoff, then this coming school year will be the last college graduating class primarily composed of Millennials. We're adults, in every sense of the word. Still, the stereotypes attached to Millennials have persisted, and while I've discussed the hows and whys, I haven't directly addressed the crimes my generation is accused of.
Here's a shortlist of refutations:
-Millennials are not as addicted to their phones as Boomers and Gen Xers.
-Millennials do not want participation trophies. Those were invented to coddle and reassure parents that their children are special. I have a box of them at home. They mean nothing to me.
-Every generation since the Boomers has been called "The Me Generation."
-Millennials aren't stupid. They're the most educated generation ever. Full stop.
-Millennials aren't lazy or entitled. We just won't work for less than what we're worth. Anyone who thinks refusing to work for free is an entitlement has no spine.
-Our debt is not due to a lack of fiscal responsibility. Boomers destroyed the economy, and we're shouldering 1.2 trillion dollars in student debt because we were taught that higher education is a prerequisite to success in this country.
* This is because discourse is predicated on the idea that each side of an argument has merit. A potential side effect of debate, however, is the creation of a neutral center, a nebulous region in which values from either end of the discussion are combined and redefined ad infinitum. Often, the center is painted as the domain of the rational thinker, the one who can clearly see both sides of an argument. The problem is, in a debate with, say, a neo-Nazi, the center must, by this definition, at least partially endorse certain ethno-fascist ideals. In this way, the creation of an ideological middle ground always benefits the more radical opinion.
**Listen...I know. I didn't see the author until after I read the article, but it makes some pretty good points. Yes, his books are still bad.
Matt Clibanoff is a writer and editor based in New York City who covers music, politics, sports and pop culture. His editorial work can be found in Inked Magazine, Popdust, The Liberty Project, and All Things Go. His fiction has been published in Forth Magazine. -- Find Matt at his website and on Twitter: @mattclibanoff
Are you interacting with a real person, or an automated program? Sometimes, it's hard to tell.
For years, science fiction writers have been telling us robots are going to take over the world. It turns out they were right.
But, it's we humans who are doing the androids' dirty work. Unless you've been living in a cabin deep in the woods without the internet (and if so, do you have an extra bunk?), you are probably familiar with the scourge of "Bots," even if you don't recognize the invasion. Bots, short for "robots," are automated programs that run over the internet. On social media, bots have made their presence felt through a wave of fake accounts posing as real people, some 48 million on Twitter alone.
Spotting a bot can be tricky.
Many of the accounts look and feel like real people, but it's worth taking the time and effort to weed out the phonies. Fake social accounts can do real harm by spreading misinformation. It's important to be able to recognize and eliminate bots because, as computer scientist Chengcheng Shao told MIT Technology Review, "Social bots play a key role in the spread of fake news."
Perhaps you recall that during the lead-up to the 2016 presidential election, Pope Francis shocked the world by endorsing Donald Trump. It was the top fake news story of 2016 and earned nearly a million Facebook engagements. Fake news works, it always has, and it isn't going anywhere. Which is why last January, Pope Francis said we all need to recognize the "snake-tactics" of the "crafty serpent" that go all the way back to the Book of Genesis. When the Pontiff himself declares, "the truth will set you free," it's time to identify and eliminate the bots in our human lives.
Pope Francis addresses the scourge of fake news at the Vatican https://goo.gl/uz6gFJ
Unfortunately, there isn't a single characteristic to help spot a bot, but there are broad identifiable patterns.
First and foremost, no matter the social media site, ask yourself a simple question when you see a post from someone you don't know personally (or more likely a post from someone you do know, reposting some "person" you don't):
Does this account seem like a real human being? Common sense is on your side. Use it.
Let's start with Twitter, where the homepage of a user can tell a lot. If the bio reads like something Rosey from The Jetsons would spit out, it's a flashing sign of online garbage. Real people write real bios. Is the avatar the default silhouette? Is the Twitter handle gobbledygook no human would choose? Do they post all the time, morning, noon, and night? People sleep, bots not so much. Or conversely, do they only retweet and repost, often to multiple accounts? This isn't how humans engage on social media.
These are the obvious ones, but of course, the bot factories are a lot more complex.
Here are a few more telltale Twitter questions to ask:

-Does the account follow a ton of people with few followers of its own?

-Has it followed and unfollowed you in a short amount of time? (Google "Who Unfollowed Me on Twitter" for a simple link.)

-Did you get a reply in microseconds?

-Do the comments, retweets, and reposts appear to be from other bots?

-Does the Tweet originate not from the web or mobile, but "from API," which is often automated?

-Are there multiple posts about breaking news within minutes? (The Parkland shooting unleashed a near-instantaneous torrent of Russian bots.)

-Does the account interact with friends and foes the same way you do?

If not, chances are, it's a bot. Another trick if you're still stumped is to take the avatar photo and run a reverse image search on it. If it pops up all over the web, it's probably a stock photo. That isn't necessarily rock solid--mine is currently of legendary Philadelphia Eagles QB Nick Foles--but it's fairly obvious if it matches up with the other giveaway signs. Two other helpful tools for verifying Twitter accounts are Botometer and Botcheck.
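In spirit, tools like Botometer do something like the checklist above, just with far richer features and real training data. As a purely illustrative sketch, here's how a toy version of that scoring idea might look; the signal names and thresholds are made-up assumptions, not anything from an actual detection service:

```python
# Toy bot-likelihood scorer based on the kinds of signals described above.
# Every field name and threshold here is an illustrative assumption.

def bot_score(account: dict) -> int:
    """Count how many telltale bot signals an account trips."""
    signals = [
        # Follows far more people than follow it back.
        account.get("following", 0) > 10 * max(account.get("followers", 1), 1),
        # Still using the default silhouette avatar.
        account.get("default_avatar", False),
        # Posts around the clock; humans sleep.
        account.get("posts_per_day", 0) > 100,
        # Almost never posts anything original.
        account.get("retweet_ratio", 0.0) > 0.9,
        # Tweets come "from API" rather than the web or mobile apps.
        account.get("source", "") == "API",
    ]
    return sum(signals)

suspicious = {
    "following": 5000, "followers": 12, "default_avatar": True,
    "posts_per_day": 400, "retweet_ratio": 0.97, "source": "API",
}
print(bot_score(suspicious))  # trips all five signals -> 5
```

A real detector would weigh dozens of features and be trained on labeled accounts, but the underlying logic is the same: no single signal is damning, while several together are a strong tell.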
Facebook has one simple built-in bot eliminator, which is that you have to accept invitations from others.
Use the old childhood axiom of not taking candy from strangers. If you don't know the person, or you have no mutual friends, ignore the request. You can also look up the account to make sure.
Other Facebook warning signs include an older pre-Timeline layout, an attractive female model as the profile picture (men are such easy marks), an empty wall or one with few personal updates and responses, or an enormous number of "likes" for pages that seemingly have zero in common. Again, does the account look like yours? The Facebook Help Center has a tool that shows which bots you followed, but it's limited to bots you directly followed, not your friends', and it doesn't go back very far. Consider it a starting point.
But what about bots on other social platforms?
The New York Times just ran a fascinating story on combating Instagram bots, which pointed out the most obvious ones: those following 7,500 accounts, the maximum allowed, without a single posted photo or profile picture. Other fishy signs include an account with a lot of followers, say 25,000, and little engagement, say two likes on a photo; an account with a giant inorganic spike in followers; or an account following many more people on the 'Gram than it has followers.
Beware of Instagram accounts with thousands of followers and no pictures. (Photo by Ben Kolde)
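That follower-to-engagement mismatch is easy to put a number on. A quick sketch, using the made-up figures from the example above and an assumed rule of thumb (organic accounts tend to see engagement on the order of 1% or more of their followers per post):

```python
# Toy engagement-rate check; the ~1% benchmark is an illustrative assumption.

def engagement_rate(likes_per_post: float, followers: int) -> float:
    """Fraction of followers who engage with a typical post."""
    return likes_per_post / followers if followers else 0.0

# The example above: ~25,000 followers but only ~2 likes per photo.
rate = engagement_rate(2, 25_000)
print(f"{rate:.4%}")  # 0.0080% -- orders of magnitude below a ~1% organic rate
```

An account whose engagement sits that far below the norm almost certainly bought its followers, which is exactly the pattern the Times piece describes.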
Spotting and blocking the fraudulent is key to a healthy social media existence.
But remember, bots aren't the biggest problem. We are. A new study by three MIT scholars found that on Twitter, most fake news is spread by humans, at a speedier rate, and at a much higher volume. Fake news stories are 70% more likely to be spread than actual news stories and reach 1,500 people six times faster. Why? The scholars theorize it's a combination of human impulses. Fake stories, often with insane too-good-to-be-true headlines, seem novel, so we share them to be "in the know." Bullcrap also triggers "surprise and disgust," whereas accurate stories engender sadness, anticipation, and (gasp) trust.
You can train yourself to spot a bot, but it's not enough. Check yourself first. Otherwise, when the robots do officially take over, we'll only have ourselves to blame.