Dall-E Mini, the AI-powered text-to-image generator, has taken over the internet. With its ability to render nearly anything your meme-loving heart desires, anyone can make their dreams come true.
DALL-E 2, a portmanteau of Salvador Dalí, the surrealist painter, and WALL-E, the Pixar robot, was created by OpenAI and is not widely available; it produces far cleaner imagery and was recently used to create Cosmopolitan's first AI-generated cover. The art world has been one of the first industries to truly embrace AI.
The open-source miniature version is what's responsible for the memes. Programmer Boris Dayma wanted to make AI more accessible, so he built the Dall-E Mini program as part of a competition held by Google and an AI community called Hugging Face.
And with great technology comes great memes. Type a short phrase into Dall-E Mini and it will generate nine different amalgamations, theoretically shaping into reality the strange images you've conjured. The tool's popularity often leads to heavy traffic, which can cause an error; refreshing the page or trying again later usually fixes it.
If you want to be a part of building AI-powered engines, it all starts with code. Codecademy explains that Dall-E Mini is a seq2seq model, "typically used in natural language processing (NLP) for things like translation and conversational modeling." Codecademy's Text Generation course will teach you how to use seq2seq, and the platform also offers opportunities to learn 14+ coding languages at your own pace.
You can choose the Machine Learning Specialist career path if you want to become a data scientist who develops these kinds of programs, or you can pick courses by language, by subject (what is cybersecurity?), or even by skill, such as building a website with HTML, CSS, and more.
Codecademy offers many classes for free, as well as a free trial; it's an invaluable resource that gives people of all experience levels the fundamentals they need to build the world they want to see.
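To make the seq2seq idea concrete, here is a deliberately tiny sketch of the encoder-decoder pattern: an encoder compresses an input token sequence into a context vector, and a decoder emits output tokens one at a time, conditioned on that context and the previously emitted token. Everything here (the toy vocabulary, the random weights, the mean-pooling encoder, the greedy decoder) is illustrative only, not Dall-E Mini's actual architecture; a real model learns its parameters from data and uses recurrent or transformer layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and dimensions (assumptions for illustration only).
VOCAB = ["<start>", "<end>", "a", "cat", "in", "space"]
EMBED_DIM = 8

# Random parameters stand in for weights a real model would learn in training.
embeddings = rng.normal(size=(len(VOCAB), EMBED_DIM))
W_out = rng.normal(size=(EMBED_DIM, len(VOCAB)))

def encode(token_ids):
    """Encoder: compress the whole input sequence into one context vector.
    (A real seq2seq model uses an RNN or transformer here, not mean-pooling.)"""
    return embeddings[token_ids].mean(axis=0)

def decode(context, max_len=5):
    """Decoder: emit tokens step by step, each step conditioned on the
    context vector and the previously emitted token."""
    out, prev = [], VOCAB.index("<start>")
    for _ in range(max_len):
        state = context + embeddings[prev]   # combine context with last token
        logits = state @ W_out               # score every vocabulary entry
        prev = int(np.argmax(logits))        # greedy choice of next token
        if VOCAB[prev] == "<end>":
            break
        out.append(VOCAB[prev])
    return out

prompt = [VOCAB.index(t) for t in ["a", "cat", "in", "space"]]
print(decode(encode(np.array(prompt))))
```

With learned weights instead of random ones, the same encode-then-decode loop is what lets a model map one sequence (an English sentence, a text prompt) to another (a translation, or in Dall-E Mini's case, tokens that describe an image).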
As for Dall-E Mini, while some have opted to create beauty, most have opted for memes. Here are some of the internet’s favorites:
[Tweet: Weird Dall-E Mini Generations (@weirddalle), June 8, 2022]
[Tweet: Weird Dall-E Mini Generations (@weirddalle), June 12, 2022]
[Tweet: "no fuck every other dall-e image ive made this one is the best yet" pic.twitter.com/iuFNm4UTUM — bri (@takoyamas), June 10, 2022]
[Tweet: Weird Dall-E Mini Generations (@weirddalle), June 12, 2022]
[Tweet: Chairman George (@superbunnyhop), June 9, 2022]
[Tweet: "back at it again at the DALL•E mini" pic.twitter.com/iPGsaMThBC — beca. ⚢ (@dorysief), June 9, 2022]
There’s no looking back now, not once you’ve seen Pugachu; artificial intelligence is here to stay.
Big Data and Our Elections
Sites like Facebook will have more and more influence over our elections in the future.
America's favorite uncorroborated news story of the moment is that the Russian government masterminded Trump's rise to power. It's easy to understand why. Introspection after a loss is difficult, and rather than face themselves, the DNC decided to hold a séance, evoking a Cold War ghost to explain their defeat. It's somewhat comforting to assume an international conspiracy was behind Hillary Clinton's failure in the 2016 election; it absolves the DNC of any responsibility to change its conduct or adjust its political strategy. That said, there is no hard evidence of collusion, but rather a string of awkward encounters by Trump's largely inexperienced, and frankly stupid, staff. The meat of Russia's "interference" came in the form of social media bots: fake accounts that would automatically repost sensationalist headlines to drum up support for Trump. These accounts are pretty easy to spot, however, as they don't come close to passing a Turing test.
Blaming Russia is too easy
Still, the creation of Russia's bot army had to be predicated on some form of information, and many have accused Putin's government of harvesting users' Facebook data in an attempt to gain a psychological understanding of the average American voter. This is where Aleksandr Kogan comes into play. Kogan sold the data of some 87 million Facebook users (collected via a quiz app) to Cambridge Analytica, a political consulting firm hired by the Trump campaign. Cambridge Analytica's goal was to create psychographic voting profiles.

While there's no definitive connection between Cambridge Analytica and Russia, the precedent set by the firm's illegal exploitation of Facebook is a frightening one. If a private company is collecting data on citizens, it's a pretty safe bet that governments around the world are doing the same. The Democratic Party's Russophobia may be mostly a reaction to losing in 2016, but it accidentally shed light on an important issue: our data isn't safe, and with recent improvements in AI and voice recognition software, we'll soon have the technology not only to create comprehensive individual psych profiles, but to tailor campaigns to individual voters.

Obviously, companies like Google and Facebook have large stores of internal data, and they've certainly been amenable to selling it, but academic researchers (like Kogan) also have large data caches. Behavioral psychologists use Facebook in studies all the time, and the academic world isn't particularly well known for its cybersecurity. Even if these databases aren't hacked, there's nothing to prevent a researcher from selling their findings after a study is complete. The quick fix is for Facebook to block third parties from collecting data on its users, and for its part, Facebook has done just that. It has begun blocking apps from collecting information and has limited the number of researchers allowed to look at data on the site.
Only academics researching political elections through the lens of social media are permitted to apply for access to Facebook's database.
At a glance, these robust safety measures are a breath of fresh air. It isn't often that a tech company is so committed to its customers' privacy. That said, when things look too good to be true, they usually are. If Facebook continues down this path of prohibition, "only Facebook will really know very much about how Facebook actually operates and how people act on Facebook," warns Dr. Rasmus Kleis Nielsen of Oxford University. Measures like these could protect data from outsiders, but they would also give a private company sole ownership of the most comprehensive database of human behaviors and tendencies ever created. Facebook would have even more sway over our local and national elections than it already does, and would gain a monopoly on two billion people's personal data. Essentially, Facebook could name its price.

Because of the way the Internet works, there's no way to effectively protect our Facebook data without severely compromising our freedom. And even if we were to let Zuckerberg shut everyone out of Facebook's data vaults, that wouldn't prevent other websites or services from collecting information on us. It wouldn't make us any safer. Our sensitive information is freely available to anyone who knows how to access it.
As technology improves, it's going to become more and more difficult to tell what is and isn't fake news, or whether that article you just read was an advertisement for Tide or for some political campaign you weren't aware of. For better or worse, we've set out to map the entire spectrum of human behaviors. Eventually, marketing campaigns will be so advanced, so accurate in their mapping of our desires, that we may forget we ever had the capacity to think. Somewhere, the ghost of B.F. Skinner is smiling.