Dall-E Mini, the AI-powered text-to-image generator, has taken over the internet. With its ability to render nearly anything your meme-loving heart desires, anyone can make their dreams come true.
DALL-E 2 (its name a portmanteau of Salvador Dalí, the surrealist, and WALL-E, the Pixar robot) was created by OpenAI and is not widely available; it creates far cleaner imagery and was recently used to launch Cosmopolitan's first AI-generated cover. The art world has been one of the first industries to truly embrace AI.
The open-sourced miniature version is what’s responsible for the memes. Programmer Boris Dayma wants to make AI more accessible; he built the Dall-E Mini program as part of a competition held by Google and an AI community called Hugging Face.
And with great technology come great memes. Typing a short phrase into Dall-E Mini generates nine different images, theoretically shaping into reality the strange scenes you've conjured. Its popularity often overwhelms the service with traffic, resulting in an error that can usually be fixed by refreshing the page or trying again later.
If you want to be a part of the creation of AI-powered engines, it all starts with code. Codecademy explains that Dall-E Mini is a seq2seq model, "typically used in natural language processing (NLP) for things like translation and conversational modeling." Codecademy's Text Generation course will teach you how to use seq2seq, and the site also offers opportunities to learn 14+ programming languages at your own pace.
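To get a feel for what "seq2seq" means, here is a toy sketch of the encoder-decoder pattern in plain Python. This is purely illustrative: a real model like Dall-E Mini replaces both functions with trained neural networks, and the task here (reversing a sequence) is just a classic seq2seq teaching example, not anything the actual model does.

```python
# Toy illustration of the seq2seq (encoder-decoder) pattern.
# A real model learns both halves from data; here the "encoder" folds the
# input into a context object and the "decoder" emits one output token at
# a time -- the same control flow, with the learning removed.

def encode(tokens):
    """Consume the input sequence step by step into a fixed 'context'."""
    context = []
    for tok in tokens:
        context = context + [tok]  # a trained encoder would update a hidden state here
    return context

def decode(context, stop="<eos>"):
    """Generate output tokens one at a time, conditioned on the context."""
    output = []
    state = list(context)
    while state:
        tok = state.pop()          # a trained decoder would sample from a distribution
        if tok == stop:
            break
        output.append(tok)
    return output

# Task: reverse a sequence -- a classic seq2seq toy problem.
print(decode(encode(["a", "b", "c"])))  # ['c', 'b', 'a']
```

The point is the shape of the computation: input and output are both sequences, possibly of different lengths, connected only through the intermediate context.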
You can choose the Machine Learning Specialist career path if you want to become a data scientist who develops these types of programs, but you can also choose courses by language, by subject (what is cybersecurity?), or even by skill, like building a website with HTML, CSS, and more.
Codecademy offers many classes for free as well as a free trial; it's an invaluable resource for giving people of all experience levels the fundamentals they need to build the world they want to see.
As for Dall-E Mini, while some have opted to create beauty, most have opted for memes. Here are some of the internet’s favorites:
no fuck every other dall-e image ive made this one is the best yet pic.twitter.com/iuFNm4UTUM
— bri (@takoyamas) June 10, 2022
There’s no looking back now, not once you’ve seen Pugachu; artificial intelligence is here to stay.
Zoom offers us the opportunity to connect with each other, but at a price.
Digital platforms are becoming vehicles for new ways of life during quarantine, offering new methods of loving and working and connecting with people who we can no longer see in person.
In particular, the video platform known as Zoom is quickly becoming integral to many of our quarantine routines. But where did Zoom come from, and is it helping us—or hurting us?
Like everything on the Internet, it's most likely doing a little bit of both. And like everything on the Internet, there's a lot going on behind the scenes.
Zoom was founded in 2011 by Eric Yuan. As of 2019, the company is valued at $16 billion. In the era of quarantine, most classes and meetings have been shifted over to its servers. Family dinners, dates, classes, revolution-planning sessions, intimate events—all have been shifted over to the online world.
Zoom is an oddly universal experience for many of us in quarantine, a strange third party in between our solitude and the buzzing outside world we used to know. A whole social culture is cropping up around it, filling the void left by concerts and bars—things like "Zoom Parties" and "Zoom Dates" are becoming common social activities, and as with all phenomena of the digital age, it's generated a fair amount of memes. ("Zoom Memes for Self Quaranteens" is just one of the Facebook groups dedicated to the strange parallel reality that is a Zoom video call).
There are fortunate aspects of Zoom. It's allowed businesses to continue and has kept families connected across continents; it's broadcasted weddings and concerts and rallies. Video conferencing services also vitally allow differently-abled people to participate in events they couldn't have attended otherwise.
But there's a darker undercurrent to all this Zooming. The problem with the Internet is that nothing is ever really private. In some part of our mind, we must know this. We know that everything we post on the Internet will live somewhere forever, impossible to scrub away. We know that the firewalls intended to block hackers are just as thin as a few lines of code, as easy to break as a windowpane. Yet now, many of us have no choice but to burn our data at the altar of the web, and to face the consequences that may come.
Zoom-Bombers Inundate Synagogues, Classrooms, and Meditations with Racist Torments
In March, a Massachusetts teacher was in the middle of a Zoom class when suddenly she found her lesson interrupted by a string of vile profanities. The interloper also shouted her home address. In another Massachusetts-based case, another trespasser invaded a Zoom meeting and displayed swastika tattoos on camera.
Cases of so-called "Zoom-bombing" have grown so common—and the attacks are sometimes so heinous and malicious—that the FBI's Boston Division issued a formal warning on March 30, 2020. "As large numbers of people turn to video-teleconferencing (VTC) platforms to stay connected in the wake of the COVID-19 crisis, reports of VTC hijacking (also called 'Zoom-bombing') are emerging nationwide," their website reads. "The FBI has received multiple reports of conferences being disrupted by pornographic and/or hate images and threatening language."
This phenomenon is certainly not relegated to Massachusetts. "Zoom-bombing" (or invading a Zoom call) may seem like just another form of trolling, but so far, many Zoom-bombers have delivered racist, hateful invectives during their intrusions onto video calls.
Also near the end of March, an online synagogue service was hijacked by racist trolls, who filled the Zoom group chat with "vile abuse" and anti-Semitic sentiments, according to the rabbi.
"It is deeply upsetting that at such a difficult period we are faced with additional challenges like these. We will be keeping the security of our online provision under review through the weeks ahead," stated the rabbi.
According to the Los Angeles Times, some University of Southern California classrooms have been Zoom-bombed with "racist taunts and porn." The stories go on and on. Morgan Elise Johnson's virtual morning meditation sessions were interrupted with graphic sexual material, which devolved into racist taunts when she tried to mute the hackers.
"I just exited out right away," Johnson said. "For it to be at a moment where we were seeking community and seeking collective calm … it really cut through my spirit and affected me in a very visceral way." Johnson's digital media company, the Triibe, has moved its meditations to Instagram live.
You can find plenty of additional examples of Zoom-bombs on TikTok and YouTube, as many gleeful hackers have uploaded footage of themselves interrupting various meetings. And if you're so inclined, you can easily find information about how to hack Zoom calls yourself. On April 2, ZDNet reported that there are now 30 Discord channels related to Zoom hacking, at least three subreddits—two of which have been banned—and many Twitter accounts broadcasting Zoom codes.
If you're tracing Zoom-bombing to its origins, the online text and voice messaging server Discord is the place where it all began. Many of the hackers are less than subtle, often posting requests for hacks. "Can anybody troll my science class at 9 15," wrote one user in one of Discord's forums, according to pcmag.com. Another user linked to a since-deleted YouTube video that showed someone sharing photos of the Ku Klux Klan to an Alcoholics Anonymous Zoom meeting. The Discord forums are also full of information about hacking Facebook livestreams and Google Hangouts calls.
While many online hackers are seeking lucrative gains like credit card numbers, Zoom hackers seem to be motivated purely by a desire for mischief and chaos, as well as racism. Essentially, they're Internet trolls, AKA the scourge of the virtual earth. But the maliciousness of these racist attacks, which are hate speech through and through, shouldn't be underestimated or taken lightly.
How to Protect Your Zoom Account
There are ways to protect yourself against the trolls. pcmag.com recommends creating your own individual meeting ID rather than using the one Zoom assigns to you. They also recommend creating a waiting room so the meeting organizer can choose whom to let in (an option found under "account settings") and requiring a password when scheduling new meetings, which can also be found in the account settings under "Schedule Meeting." When all participants have arrived, you can "lock" the meeting by clicking "Participants" at the bottom of the window.
In addition, The Verge recommends that users disable the screen-sharing feature. Other sites advise turning "attention tracking" off—which can prevent call organizers from seeing whether you're looking at other tabs during your meeting—and you can use a virtual background to disguise your living space.
The Anti-Defamation League has synthesized all this into a list of handy tips, all of which are explained in detail on its website.
Before the meeting:
- Disable autosaving chats
- Disable file transfer
- Disable screen sharing for non-hosts
- Disable remote control
- Disable annotations
- Use a per-meeting ID, not your personal ID
- Disable "Join Before Host"
- Enable "Waiting Room"
During the meeting:
- Assign at least two co-hosts
- Mute all participants
- Lock the meeting once all attendees are present
If you are Zoom bombed:
- Remove problematic users and disable their ability to rejoin when asked
- Lock the meeting to prevent additional Zoom-bombing
Zoom itself has faced scrutiny and questions for quite a while, and its problems extend beyond a vulnerability to hacks. "Things you just would like to have in a chat and video application — strong encryption, strong privacy controls, strong security — just seem to be completely missing," said Patrick Wardle, a security researcher who previously worked at the National Security Agency, per NPR. Wardle discovered a flaw in Zoom's code that could allow hackers to watch video participants through a webcam, a problem Zoom says it has fixed.
And of course, like most big tech companies, Zoom is in cahoots with the evil mastermind behind tech's greatest data suck—Facebook.
Zoom Has Been Quietly Sharing Data with Facebook
Many websites use Facebook's software to enhance and facilitate their own programs. For its part, Zoom connects to Facebook's Graph API the moment it opens, which immediately places data in the hands of Facebook's developers, according to Motherboard. The app lets Facebook know the time, date, and location of every Zoom call, along with additional device information. The problem is that Zoom never asked its users for permission to send this data to Facebook.
Following the Motherboard report, a California Zoom user sued the company for failing to "properly safeguard the personal information of the increasing millions of users," arguing that Zoom is violating California's Unfair Competition Law, Consumers Legal Remedies Act and the Consumer Privacy Act. "The unique advertising identifier allows companies to target the user with advertisements," reads the lawsuit. "This information is sent to Facebook by Zoom regardless of whether the user has an account with Facebook."
In response, Zoom CEO Eric Yuan published a blog post in which he stated, "Our customers' privacy is incredibly important to us, and therefore we decided to remove the Facebook SDK in our iOS client and have reconfigured the feature so that users will still be able to log in with Facebook via their browser." Still, users are questioning whether one CEO's word is enough.
There are other eerie realities about Zoom. For example, call organizers have more power than we think. Using certain settings (which can all be turned off), hosts can read their call participants' messages and can see whether attendees clicked away from the call.
Based on its own technical white paper, Zoom falsely marketed one of its features as making meetings "end-to-end encrypted." https://t.co/Lh1KLYBidT
— WIRED (@WIRED) April 3, 2020
The contents of thousands of video calls made on the app Zoom were exposed on the open web, and easily available vi… https://t.co/5s1QYRanG2
— Xeni Jardin (@xeni) April 3, 2020
Certainly in the future, more dark truths will emerge about Zoom. For now, many tech companies are advising users not to say or do anything on Zoom that they wouldn't want to be broadcast to the public. Of course, Zoom is probably no more or less safe than the rest of the Internet, a thought that could be comforting or nightmarish depending on what you've been up to online.
A Much Larger Problem: The Internet Is Not a Safe Space
The prospect of a Zoom hack or data breach is scary, and it's absolutely not what any of us want to worry about during these unstable quarantined times. Yet it also reveals a dark truth about the Internet: nothing is safe online—and pretty much everything we post is being used to sell us things. Unless you're a super-famous person or the unlucky target of a bored troll, you should be less afraid that someone will go through your Google search history and more afraid of what's happening to all the times you put in your home address and email on a website's pop-up questionnaire.
Zoom's encryption may be shoddy, but it's certainly not the only site that's selling your data to Google, Facebook, and their cohort of advertising companies. In fact, every ad you see is based on your web search history, demographics, location, and other factors individual to you.
Data—which includes but is not limited to the numbers, emails, and information you put in on all those online forms—is one of the most valuable commodities of the Internet age. Every time we do anything online, or carry our phones anywhere, someone is collecting data, and typically that data is filtered into massive data-collection artificial intelligence programs that synthesize data and use it to sell products.
"We are already becoming tiny chips inside a giant system that nobody really understands," writes historian Yuval Noah Harari. "Every day I absorb countless data bits through emails, phone calls and articles; process the data; and transmit back new bits through more emails, phone calls and articles."
And what will happen to all this data? It's hard to say—but conspiracies are thriving. "Dataists believe in the invisible hand of the dataflow. As the global data-processing system becomes all-knowing and all-powerful, so connecting to the system becomes the source of all meaning," adds Harari. "Dataists further believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives."
That might be extreme, but perhaps not. During this coronavirus quarantine, we will all be pouring much more information onto the Internet than ever before, providing the Internet with intimate knowledge of our every move—more than enough to create pretty accurate digital representations of our own minds. Whoever finds out the best way to harness and sell this data (be it an AI program, a human, or Elon Musk and Grimes' cyborgian child) may just be our new god.
8 ways to protect yourself, right now
When I was studying in China, the other kids and I always freaked out when we were doing something illicit, like entertaining a Chinese friend or using an electric tea kettle, and the dorm attendant came knocking at the door. Clearly we were being surveilled. Over time, one of the things we grew to appreciate about the United States was our individual privacy. Obviously, since then, what seemed like an inviolable right has been casually thrown away like a pile of old VHS tapes. Where I once cherished my privacy, now I might as well be sprawled naked on the pavement in Times Square surrounded by my open passport, credit cards, bank statements, and diaries.
The Internet is an incredible tool, but it appeals to some of our worst tendencies: sloth, addiction, prurience. We love it because it's free, although of course, we're all paying a huge price. Even after debacles like Yahoo exposing the data of every single one of its users, three billion in all, or the Cambridge Analytica-Facebook scandal, how many people actually deleted any accounts, changed their privacy settings, or read the epic and stultifying privacy agreements on social media? In the United States, what business theorist Shoshana Zuboff terms "surveillance capitalism" was allowed to develop largely unregulated, allowing companies that rely on mining personal data for revenue, in particular Google and Facebook, to become, according to the New York Times, an "emerging duopoly that today controls more than half of the worldwide market in online advertising."
This spring, the European Union enacted the General Data Protection Regulation, a sweeping law that requires companies to use the highest possible privacy settings and to disclose any personal data they collect. In June, California followed suit with its California Consumer Privacy Act of 2018, the most robust such law in the nation. And federal regulations? Remember back in 2017 (I know that seems like the Dark Ages given the current breakneck news cycle) when President Trump signed a repeal of an Obama-era rule that, under the FCC, would have required broadband companies to get their customers' permission before collecting "sensitive data" such as browsing history and geolocation? In late July, the Commerce Department "began holding stakeholder meetings to identify common ground and formulate core, high-level principles on data privacy," according to a senior official speaking to Reuters. In other words, don't hold your breath waiting for federal legislation.
Even California's law doesn't go into effect until 2020. What can you do right now to protect your privacy? Here are some steps you can complete in under an hour that will beef up your computer or phone's security:
1. Turn off location tracking for all of your apps. You can turn it back on selectively when you need it (such as with Uber).
2. Install automatic updates. This way your software will have the latest security features.
3. Cover your webcam with a piece of tape or a Post-it note, like Mark Zuckerberg does. We know he's an expert on shady ways to collect personal information.
4. Use a password on every computer and gadget, not just your phone. Make it at least six characters long and strong; 123456 or your birthday will simply not do.
5. Put your social media accounts on lockdown. Check your privacy settings. Don't make everything public. Share only within a verifiable group of friends.
6. Avoid using public wifi connections. They can be convenient but the information you transmit is not secure.
7. Don't give away personal information that you don't have to. Phone number? Address? Birthdate? Nope. Facebook does not need to know.
8. Delete your search history regularly. This is critical if you use shared computers such as at school or in a library.
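Tip 4 can be made concrete with a small check. This is a sketch, not a security library: the shortlist of common passwords and the three-of-four character-class rule are illustrative choices of my own, and a password manager generating long random passwords is the better real-world answer.

```python
import re

# Illustrative shortlist; real leaked-password lists run to millions of entries.
COMMON = {"123456", "password", "qwerty", "111111", "letmein"}

def is_strong(password: str, birthday: str = "") -> bool:
    """Rough check for tip 4: at least six characters, not a common
    password, not your birthday, and mixing character classes."""
    if len(password) < 6:
        return False
    if password.lower() in COMMON:
        return False
    if birthday and birthday in password:
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    # Require at least three of the four character classes.
    return sum(bool(re.search(c, password)) for c in classes) >= 3

print(is_strong("123456"))        # False: common password
print(is_strong("Tr1cky!horse"))  # True: long and mixed
```

Even a crude check like this catches the two failure modes the tip warns about: passwords that are too short and passwords everyone else is already using.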
Consumer Reports has a useful list of nearly 70 other steps you can take to protect your security and privacy.
Have more tips? Tweet us at The Liberty Project.
Sites like Facebook will have more and more influence over our elections in the future.
America's favorite uncorroborated news story of the moment is that the Russian government masterminded Trump's rise to power. It's easy to understand why. Introspection after a loss is difficult, and rather than face themselves, the DNC decided to hold a seance, evoking a Cold War ghost to explain their defeat. It's somewhat comforting to assume an international conspiracy was behind Hillary Clinton's failure in the 2016 election. It absolves the DNC of any responsibility to change their conduct or adjust their political strategy. That said, there is no hard evidence of collusion, but rather a string of awkward encounters by Trump's largely inexperienced, and frankly stupid, staff. The meat of Russia's "interference" came in the form of social media bots, fake accounts that would automatically repost sensationalist headlines to drum up support for Trump. These accounts are pretty easy to spot, however, as they don't even come close to passing a Turing test.
Blaming Russia is too easy
Still, the creation of Russia's bot army had to be predicated on some form of information, and many have accused Putin's government of tracking users' Facebook data in an attempt to gain a psychological understanding of the average American voter. This is where Aleksandr Kogan comes into play. Kogan sold the data of some 87 million Facebook users (collected via a quiz app) to Cambridge Analytica, a political consulting firm hired by the Trump campaign. Cambridge Analytica's goal was to create psychographic voting profiles. While there's no definitive connection between Cambridge Analytica and Russia, the precedent set by CA and their illegal exploitation of Facebook is a frightening one. If a private company is collecting data on citizens, it's a pretty safe bet that governments around the world are doing the same. While the Democratic Party's Russophobia is definitely a reaction to losing in 2016 more than anything else, it accidentally shed light on an important issue: our data isn't safe, and with recent improvements in AI and voice recognition software, we'll soon have the technology not only to create comprehensive individual psych profiles, but to tailor campaigns to individual voters.
Obviously companies like Google and Facebook have large stores of internal data, and they've certainly been amenable to selling it, but academic researchers (like Kogan) also have large data caches. Behavioral psychologists use Facebook in studies all the time, and the academic world isn't particularly well-known for its cybersecurity. Even in the event that these databases aren't hacked, there's nothing to prevent a researcher from selling their findings after their study is complete. The quick fix is to let Facebook block third parties from collecting data on its users, and for its part, Facebook has done just that. It has begun blocking apps from collecting information and has also limited the number of researchers allowed to look at data on the site.
Only academics researching political elections through the lens of social media are permitted to apply for access to Facebook's database.
At a glance, these robust safety measures are a breath of fresh air. It isn't often that a tech company is so committed to its customers' privacy. That said, when things look too good to be true, they usually are. If Facebook continues its path to prohibition, "only Facebook will really know very much about how Facebook actually operates and how people act on Facebook," warns Dr. Rasmus Kleis Nielsen of Oxford University. Sure, measures like these could protect data from outsiders, but it would also give a private company sole proprietorship over the most comprehensive database of human behaviors and tendencies ever created. Facebook would have even more sway over our local and national elections than it already does, and would gain a monopoly over 2 billion people's personal data. Essentially, Facebook could name its price. Because of the way the Internet works, there's no way to effectively protect our Facebook data without severely compromising our freedom. And even if we were to let Zuckerberg shut everyone out of Facebook's data vaults, this doesn't prevent other websites or services from collecting information on us. It doesn't make us any safer. Our sensitive information is freely available to anyone who knows how to access it.
As technology improves, it's going to become more and more difficult to tell what is and isn't fake news—whether or not that article you just read was an advertisement for Tide or some political campaign you weren't aware of. For better or worse, we've set out to map the entire spectrum of human behaviors. Eventually, marketing campaigns will be so advanced, so accurate in their mapping of our desires, we may forget that we ever had the capacity to think. Somewhere, the ghost of B.F. Skinner is smiling.
Has school data collection gone too far?
In today's educational climate, the marker of a school's success is determined by the success of its students, both during their time in school and beyond. While in the past, the idea that schooling should be catered to each individual pupil would have seemed ludicrous, many American schools today, both public and private, collect data on their students with the goal of providing just that. By extensively monitoring data collected on their students, teachers and school administrators can see exactly where each individual student excels, as well as where students need work. Though it's not always the case, the use of data and the creation of learner profiles lends itself to the practice of academic tracking.
Academic tracking is the process of separating the highest achieving students and creating a tier system for classes based on students' aptitude in each subject. If classes in your high school were split up into honors, college prep, and general education segments, you grew up learning in this environment. Tracking itself is a controversial subject, with many calling it out as de facto segregation and saying that it negatively affects Black and Latino students. Whether or not this is true is the subject of much debate. That said, tracking does disproportionately benefit the children who are high academic achievers, as resources are often diverted to AP and honors courses rather than their gen-ed counterparts.
The use of data to help this tracking system operate can be viewed either positively or negatively, depending on your level of optimism. On the one hand, the use of data and individualized teaching practices could lead to the dissolution of tracking altogether, since it would be much easier to help struggling students reach their academic potential. On the other hand is... well, reality. Unfortunately, when theory turns to practice, students aren't all at the same level. They aren't all blank slates that can learn at the same rate. The problem presented by data collection, particularly if it's coupled with an academic tracking system, is rigidity. With the use of learner profiles, it's possible to break down precisely, to the percentage point, what constitutes an honors student. But how does this work for courses like English, which are largely based on subjective essay grades? On top of this, data doesn't do a particularly good job of showing effort or desire to learn, both of which are integral to an honors environment. Too strong an emphasis on test scores and learner profiles could potentially take away from the human aspects of the teacher/student relationship.
Another prevalent issue regarding data collection is its permanence, as well as the legal precedent set by allowing schools to maintain databases on their students. Many parents are uncomfortable with the idea that their children's school might be keeping a personal file on them. From 2012 to 2014, there was actually a grassroots movement dedicated to fighting a project called InBloom, which aimed to profit from the release of student data. The idea was that no one other than students and educators should be allowed to access those records and that InBloom's mission directly violated students' right to privacy. Data shared within a school system can be dangerous because of its ability to shape teachers' opinions about students before they meet. If that data were given to the outside world, say to potential employers, it could be devastating for students trying to get jobs out of high school. Not to mention the field day that advertisers and marketers would have if they were given access to students' personal data.
The question then remains: if there's a constant threat of dissemination and the advantages of data collection, while promising, aren't yet solidified, why do it? Even with hundreds of companies pledging to protect student privacy, the risk involved seems to significantly outweigh the reward. Many advocates of data collection argue that skeptics are allowing their fear to get the better of them, to the detriment of our public schools. But doesn't it make sense to be skeptical of a scenario in which educators can afford to collect data on students but school systems can't afford books and pencils? Data collection remains an interesting proposition, specifically with regard to personalized education, but until specific legislation is drawn up to combat potential abuse, it seems a bit too risky. It's not necessarily a Luddite position to be wary of data collection while still treating the measurement of student progress as an essential part of teaching. At the end of the day, educators, not a collection of data points, are responsible for whether or not students succeed.
Matt Clibanoff is a writer and editor based in New York City who covers music, politics, sports and pop culture. His editorial work can be found in Inked Magazine, Pop Dust, The Liberty Project, and All Things Go. His fiction has been published in Forth Magazine. -- Find Matt at his website and on Twitter: @mattclibanoff