Dall-E Mini, the AI-powered text-to-image generator, has taken over the internet. With its ability to render nearly anything your meme-loving heart desires, anyone can make their dreams come true.
DALL-E 2, a portmanteau of Salvador Dalí, the surrealist, and WALL-E, the Pixar robot, was created by OpenAI and is not widely available; it produces far cleaner imagery and was recently used to create Cosmopolitan's first AI-generated cover. The art world has been one of the first industries to truly embrace AI.
The open-sourced miniature version is what’s responsible for the memes. Programmer Boris Dayma wants to make AI more accessible; he built the Dall-E Mini program as part of a competition held by Google and an AI community called Hugging Face.
And with great technology come great memes. Typing a short phrase into Dall-E Mini generates nine different images, theoretically shaping into reality the strange scenes you've conjured. Its popularity often overwhelms the site with traffic, resulting in an error that can usually be fixed by refreshing the page or trying again later.
If you want to be a part of the creation of AI-powered engines, it all starts with code. Codecademy explains that Dall-E Mini is a seq2seq model, "typically used in natural language processing (NLP) for things like translation and conversational modeling." Codecademy's Text Generation course will teach you how to use seq2seq models, and the platform also offers opportunities to learn 14+ coding languages at your own pace.
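The seq2seq idea can be made concrete with a toy sketch: an encoder folds the input tokens into a single context vector, and a decoder unrolls from that context to emit output tokens one at a time. Everything below is illustrative, not Dall-E Mini's actual architecture: the vocabulary size, weight names, and plain-RNN cells are assumptions for clarity, and the weights are random, so the output is meaningless until trained. (The real Dall-E Mini uses a far larger transformer-based encoder-decoder.)

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, HIDDEN, EMBED = 12, 16, 8  # toy sizes, chosen arbitrarily

# Random, untrained parameters -- a real model learns these from data.
E = rng.normal(0, 0.1, (VOCAB, EMBED))      # token embeddings
W_xh = rng.normal(0, 0.1, (EMBED, HIDDEN))  # input -> hidden
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN)) # hidden -> hidden
W_hy = rng.normal(0, 0.1, (HIDDEN, VOCAB))  # hidden -> vocab logits

def encode(tokens):
    """Fold the whole input sequence into one context vector."""
    h = np.zeros(HIDDEN)
    for t in tokens:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    return h

def decode(h, max_len, bos=0):
    """Greedily emit tokens, starting from the encoder's context."""
    out, tok = [], bos
    for _ in range(max_len):
        h = np.tanh(E[tok] @ W_xh + h @ W_hh)
        tok = int(np.argmax(h @ W_hy))
        out.append(tok)
    return out

context = encode([3, 1, 4, 1, 5])   # "input sentence" as token ids
generated = decode(context, max_len=6)
```

In a text-to-image model like Dall-E Mini, the decoder's output tokens are not words but codes that a separate image model turns into pixels; the encoder-decoder skeleton, however, is the same.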
You can choose the Machine Learning Specialist career path if you want to become a data scientist who develops these kinds of programs, but you can also choose courses by language, by subject (what is cybersecurity?), or even by skill, such as building a website with HTML and CSS.
Codecademy offers many classes for free as well as a free trial; it's an invaluable resource that gives people of all experience levels the fundamentals they need to build the world they want to see.
As for Dall-E Mini, while some have opted to create beauty, most have opted for memes. Here are some of the internet’s favorites:
no fuck every other dall-e image ive made this one is the best yet pic.twitter.com/iuFNm4UTUM
— bri (@takoyamas) June 10, 2022
There’s no looking back now, not once you’ve seen Pugachu; artificial intelligence is here to stay.
The company claims over 600 law enforcement agencies use their app, but in the wrong hands, it could pose extreme dangers. Here's an explainer.
Imagine you're at a bar and you see a person you find attractive.
You sneakily take a photo of them, and use that photo in an app that pulls up every public photo of that person available online. Links to each photo are also provided, meaning you can find out this person's name, workplace, hometown, friends, and more, without even talking to them. An app called Clearview AI has made the potential for this situation a reality.
Recently, New York Times reporter Kashmir Hill investigated the tiny start-up that's taking revolutionary steps in facial recognition technology. Clearview AI was developed by Hoan Ton-That, a San Francisco techie by way of Australia, who marketed the app as a tool for law enforcement to track down suspects. Clearview's database contains over three billion images scraped from millions of websites; the premise is that you can upload a photo of a person and see public photos of them, along with links to the sites those photos came from.
Though this sounds like a remarkable tool for law enforcement, Clearview poses severe threats to privacy in the wrong hands. As Hill described in her appearance on the Times podcast The Daily this week, someone with malicious intent could theoretically take a photo of a stranger, upload it to Clearview, and uncover personal information like that person's name, where they work, where they live, and who their family members are. In short: the concept is so risky that companies capable of building the same thing first, like Google, refused to.
Still, Clearview claims that over 600 law enforcement agencies have been using the app, although it has kept its list of customers private. Clearview's investors have cited the app's crime-solving capabilities as a means to back it; Clearview has already helped track down suspects on numerous occasions. But, as Hill's reporting found, the app isn't always perfect and might not be fully unbiased; "After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview's systems and for a while showed no matches," Hill wrote. "When asked about this, Mr. Ton-That laughed and called it a 'software bug.'" Later, when Ton-That ran another photo of Hill through the app, it pulled up a decade's worth of photos, many of which Hill didn't even realize were public.
"Our belief is that this is the best use of the technology," Ton-That told Hill. But is Clearview's usefulness in law enforcement worth leaving our privacy behind every time we leave the house?
How the newest viral meme could change the future.
You're probably already familiar with the viral '10 Year Challenge' meme.
You may have even participated. But for the uninitiated: you simply post a picture of yourself from 10 years ago next to a current one. It's the sort of fun, simple premise that understandably gains traction online. It's cool seeing how your friends, family, and even strangers have changed over time, so it's no wonder '10 Year Challenge' images have taken over social media sites like Facebook, Instagram, and Twitter.
But it's also important to remember that whenever you use social media, you are the product, and any information you post, including your interests and identity, can be utilized by corporations. With that in mind, author Kate O'Neill raised a distinct possibility: that perhaps the '10 Year Challenge' was not so innocent after all, and was instead being used to train facial recognition algorithms.
Me 10 years ago: probably would have played along with the profile picture aging meme going around on Facebook and… https://t.co/68zXx8bzoY — Kate O'Neill, January 2019
The tweet quickly gained traction, leading O'Neill to pen a piece on the topic for Wired. There, she discusses potential scenarios in which a seemingly harmless meme could be used to shape our future in unforeseeable ways.
For instance, advanced facial recognition technology could be of great societal benefit in helping find missing people. With the right algorithm at work, we might be able to age up child abduction victims or other missing persons accurately enough to make them recognizable even decades later.
On the other hand, advanced facial recognition technology could potentially be utilized by insurance companies to deny coverage to people they deem likely to develop certain conditions. Obviously, the technology is nowhere near that point yet, but it's essential to consider the reach of future technology when discussing the way we put information about ourselves online in the present.
That being said, the notion that Facebook is using this meme for the purpose of facial recognition technology has detractors too. Some argue that most of the pictures people are using were already on Facebook in the first place. As such, Facebook wouldn't gain any new information from this meme. Others think the non-serious nature of many online posts would probably ruin a lot of potential data.
Still, the existence of two images, both dated by their subject as being a decade apart, creates a more accurate data sample than two images simply posted ten years apart and assumed to be accurately dated. Moreover, even current facial recognition software is adept at recognizing human faces, so joking content would likely not be as big a hindrance as some might imagine.
All that being said, Facebook denies any direct involvement with the popularity of this meme. A spokesperson told O'Neill: "This is a user-generated meme that went viral on its own. Facebook did not start this trend, and the meme uses photos that already exist on Facebook. Facebook gains nothing from this meme (besides reminding us of the questionable fashion trends of 2009). As a reminder, Facebook users can choose to turn facial recognition on or off at any time."
That's not to say Facebook is necessarily being honest here, but chances are this really is just a harmless meme spread by users who find it fun. Even if Facebook could conceivably use it for technological advancement, there's probably no conspiracy at play. But the larger conversation about how freely we supply data about our lives to companies online is still entirely relevant.
Before you post anything online, always consider how that information could be used in the future. Do you really want that out there?