DALL-E Mini, the AI-powered text-to-image generator, has taken over the internet. With its ability to render nearly anything your meme-loving heart desires, anyone can make their dreams come true.
DALL-E 2, a portmanteau of Salvador Dalí, the surrealist painter, and WALL-E, the Pixar robot, was created by OpenAI and is not widely available; it creates far cleaner imagery and was recently used to produce Cosmopolitan's first AI-generated cover. The art world has been one of the first industries to truly embrace AI.
The open-source miniature version is what's responsible for the memes. Programmer Boris Dayma wants to make AI more accessible; he built the DALL-E Mini program as part of a competition held by Google and an AI community called Hugging Face.
And with great technology come great memes. Typing a short phrase into DALL-E Mini will manifest nine different amalgamations, theoretically shaping into reality the strange images you've conjured. Its popularity often brings too much traffic, resulting in an error that can usually be fixed by refreshing the page or trying again later.
If you want to be a part of the creation of AI-powered engines, it all starts with code. Codecademy explains that DALL-E Mini is a seq2seq model, "typically used in natural language processing (NLP) for things like translation and conversational modeling." Codecademy's Text Generation course will teach you how to utilize seq2seq, and the site also offers opportunities to learn 14+ coding languages at your own pace.
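To make the seq2seq idea concrete, here is a minimal structural sketch of an encoder-decoder: the encoder folds an input token sequence into a context vector, and the decoder emits output tokens from that context. This is an illustration of the data flow only, with randomly initialized weights and made-up sizes; a real model like DALL-E Mini is trained on enormous datasets and is vastly more complex.

```python
# Structural sketch of a seq2seq (encoder-decoder) model.
# Weights are random and sizes (VOCAB, HIDDEN) are illustrative,
# so the "output" is meaningless -- only the shape of the computation matters.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 12, 8
embed = rng.normal(size=(VOCAB, HIDDEN))   # token embeddings
W_enc = rng.normal(size=(HIDDEN, HIDDEN))  # encoder recurrence
W_dec = rng.normal(size=(HIDDEN, HIDDEN))  # decoder recurrence
W_out = rng.normal(size=(HIDDEN, VOCAB))   # hidden state -> vocab logits

def encode(token_ids):
    """Fold the whole input sequence into one context vector."""
    h = np.zeros(HIDDEN)
    for t in token_ids:
        h = np.tanh(W_enc @ h + embed[t])
    return h

def decode(context, steps):
    """Greedily emit `steps` output tokens from the context vector."""
    h, out = context, []
    for _ in range(steps):
        h = np.tanh(W_dec @ h)
        out.append(int(np.argmax(h @ W_out)))
    return out

tokens = decode(encode([3, 1, 4]), steps=4)
```

In a trained system the same skeleton holds; training simply tunes the weight matrices so the decoder's output is a translation, a reply, or, in DALL-E Mini's case, a sequence of image tokens.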
You can choose the Machine Learning Specialist career path if you want to become a data scientist who develops these types of programs, but you can also choose courses by language, by subject (what is cybersecurity?), or even by skill: build a website with HTML, CSS, and more.
Codecademy offers many classes for free as well as a free trial; it's an invaluable resource for giving people of all experience levels the fundamentals they need to build the world they want to see.
As for DALL-E Mini, while some have opted to create beauty, most have opted for memes. Here are some of the internet's favorites:
no fuck every other dall-e image ive made this one is the best yet pic.twitter.com/iuFNm4UTUM
— bri (@takoyamas) June 10, 2022
There’s no looking back now, not once you’ve seen Pugachu; artificial intelligence is here to stay.
The company claims over 600 law enforcement agencies use their app, but in the wrong hands, it could pose extreme dangers. Here's an explainer.
Imagine you're at a bar and you see a person you find attractive.
You sneakily take a photo of them, and use that photo in an app that pulls up every public photo of that person available online. Links to each photo are also provided, meaning you can find out this person's name, workplace, hometown, friends, and more, without even talking to them. An app called Clearview AI has made the potential for this situation a reality.
Recently, New York Times reporter Kashmir Hill investigated the tiny start-up that's taking revolutionary steps in facial recognition technology. Clearview AI was developed by Hoan Ton-That, a San Francisco techie by way of Australia, who marketed the app as a tool for law enforcement to track down suspects. Clearview's database contains over three billion images scraped from millions of websites; the premise is that when you take a photo of a person, you can upload it, see public photos of that person, and access links to where those photos are from.
Though this sounds like a remarkable tool for law enforcement, Clearview poses severe threats to privacy if placed in the wrong hands. As Hill described in her appearance on the Times podcast The Daily this week, someone with malicious intent could theoretically take a photo of a stranger, upload it to Clearview, and uncover personal information like that person's name, where they work, where they live, and who their family members are. In short: the concept is so risky that companies that could have built the same thing first, like Google, refused to.
Still, Clearview claims that over 600 law enforcement agencies have been using the app, although it has kept its list of customers private. Clearview's investors have cited the app's crime-solving capabilities as a means to back it; Clearview has already helped track down suspects on numerous occasions. But, as Hill's reporting found, the app isn't always perfect and might not be fully unbiased; "After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview's systems and for a while showed no matches," Hill wrote. "When asked about this, Mr. Ton-That laughed and called it a 'software bug.'" Later, when Ton-That ran another photo of Hill through the app, it pulled up a decade's worth of photos—many of which Hill didn't even realize were public.
"Our belief is that this is the best use of the technology," Ton-That told Hill. But is Clearview's usefulness in law enforcement worth leaving our privacy behind every time we leave the house?
By submitting your genetic material to a company, you're tacitly agreeing to share your identity and rights to your most private information.
From "fear of missing out" on social media to belligerent political differences, modern existence is increasingly alienating. As a result, more people are interested in "finding their tribe" by digging up their family origins. But genetics-testing companies like Ancestry and 23andMe take more than your DNA; they take your rights to the privacy of that information as well. With the Golden State Killer finally arrested thanks to data mined from genetic databases, law enforcement has proven its ability to access such companies' records.
In the same vein, the government can gain access to personal information given to these sites for purposes they deem justified. For example, in 2019, Canadian immigration officials obtained DNA results from sites like Familytreedna.com and Ancestry.com in order to identify immigrants' nationality and trace their relatives. Subodh Bharati, a lawyer representing one targeted individual, told Vice, "I think it is a matter of public interest that border service agencies like the CBSA are able to obtain access to DNA results...There are clear privacy concerns. How is the CBSA able to access this information and what measures are being put in place to ensure this information remains confidential?"
While each site in question denies working with government agencies, if authorities argue that national security is at risk, then the websites "can't really say no," as immigration lawyer Jared Will explains. He condemns the exchange as "extorted consent." Bharati warns potential customers, "Individuals using these sites to look at their family tree should be aware that their confidential information is being made available to the government and that border agents may contact them to help facilitate the deportation of migrants."
Furthermore, accessing your data doesn't always require government action. For instance, according to 23andMe's policy, "We do not share customer data with any public databases. We will not provide any person's data (genetic or non-genetic) to an insurance company or employer. We will not provide information to law enforcement or regulatory authorities unless required by law to comply with a valid court order, subpoena, or search warrant for genetic or Personal Information." Yet there's an additional permission users are asked to agree to, reading, "By agreeing to the Research Consent Document, Individual Data Sharing Consent Document, or participating in a 23andMe Research Community, you can give consent for the use of your data for scientific research purposes."
In July 2018, 23andMe announced it was partnering with the world's ninth-largest pharmaceutical company, GlaxoSmithKline (GSK). The agreement grants GSK exclusive access to the genetic information of over 5 million users, and 23andMe received $300 million. GSK released a statement explaining their interest in genetic databases, saying, "The goal of the collaboration is to gather insights and discover novel drug targets driving disease progression and develop therapies."
While it's a universal good to create more effective and closely targeted medicine, the transactional exchange of people's most private information, their DNA, unsettles many. Peter Pitts, president of the Center for Medicine in the Public Interest, told NBC, "Are they going to offer rebates to people who opt in, so their customers aren't paying for the privilege of 23andMe working with a for-profit company in a for-profit research project?" In essence, people are paying the site to make money off their information, with no recompense.
Additionally, despite what's written in the company's policy, "the problem with a lot of these privacy policies and Terms of Service is that no one really reads them," says Tiffany C. Li, a privacy expert and resident fellow at Yale Law School's Information Society Project. While users can opt to close their 23andMe accounts or retract their permission once it's given, the company emphasizes, "Any research involving your data that has already been performed or published prior to our receipt of your request will not be reversed, undone, or withdrawn."
Lastly, there's the possibility of information leaks. In June 2018, the DNA testing service MyHeritage announced that its database of 92 million accounts had been breached. The breach exposed only email addresses and hashed passwords, but the company was targeted because genetic data commands a far higher premium than credit card or bank information. Hackers could hold DNA data for ransom, according to Giovanni Vigna, co-founder of the cybersecurity company Lastline. He says, "This data could be sold on the down-low or monetized to insurance companies. You can imagine the consequences: One day, I might apply for a long-term loan and get rejected because deep in the corporate system, there is data that I am very likely to get Alzheimer's and die before I would repay the loan."
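The detail that the stolen passwords were hashed rather than stored in plaintext matters: a one-way, salted hash means a breach exposes digests instead of the passwords themselves. Here is a minimal sketch of that idea using Python's standard library; the function names and the iteration count are illustrative choices, not MyHeritage's actual scheme.

```python
# Sketch of salted password hashing: the server stores (salt, digest),
# never the password. Verification re-derives the digest and compares it
# in constant time. Parameters here are illustrative, not any site's real config.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return a (salt, digest) pair for storage."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Check a login attempt against the stored salt and digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
```

A thief who steals `(salt, digest)` still has to guess passwords one at a time, which is exactly why breach notices distinguish hashed credentials from plaintext ones. Genetic data, by contrast, cannot be "reset" like a password, which is what makes it so valuable.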
Ultimately, by submitting your genetic material to a company, you tacitly agree to share your identity and rights to your most private information. As Natalie Ram, a law professor in bioethics, says, "If there is data that exists, there is a way for it to be exploited."
Do the benefits of knowing a child’s location outweigh the risks of giving that information to hackers?
For busy, working parents; parents of children who take public transportation to school; parents of children with special needs; and parents who simply want to know where their children are in case of emergencies, more and more GPS devices promise to track a child's location and broadcast it to the parents' phones. These watches, wristbands, and phone-sized devices are immediately attractive to a worried parent. Many offer features beyond tracking, including communication, distress signals, augmented reality, water sensing, and more. What parent doesn't want to better protect their children by keeping them away from dangerous places and situations?
But any electronic device is susceptible to hackers and a GPS-enabled communication device attached to a child is dangerous in the wrong hands. How can a parent weigh the benefits of knowing their child's location with the risks of exposing that location to hackers?
It starts with considering the situation: is a GPS tracker really the solution to a concerned parent's worries? There are unquestionably situations that call for better surveillance of a child's location, like parents who work and children who travel to school by themselves. Communication and awareness are essential to a child's safety. If used responsibly, this monitoring-from-a-distance could even give a child of a certain age more freedom without sacrificing protection.
It is already becoming common for pre-teens to have their own smartphones. A parent can use the phone's built-in features to track the child's location. Cell carriers also offer tracking features, such as AT&T's FamilyMap and Verizon's Family Locator.
But for a younger child without a GPS-enabled phone, a GPS tracker designed for kids might be a quick way to better peace of mind. A parent who's shopping for these trackers (or who's already using one) needs to understand the risks, where they come from and how to defend against them.
Norwegian researchers tested four kids' smartwatches last year and were surprised by the lack of security on the devices. They were able to hack into them relatively easily, collect private information, view the user's location, and even send false location info to the parent's phone. One watch's SOS function didn't work. Some of the watches' data was transmitted without encryption.
A serious point of danger in some watches is their communication ability. Watches that allow the parent and child to communicate via voice or text can also allow hackers to communicate with the child, pretending to be someone they know.
Last year, the European Consumer Organisation (BEUC) published a warning against smartwatches designed for children. The German telecom agency, Bundesnetzagentur, banned the watches and asked parents to destroy any they'd already purchased. And the FBI issued a general warning against internet-connected devices and the privacy risks that come with them.
It is important to choose a device from a reliable or expert-endorsed company that focuses on security and privacy.
Verizon sells its GizmoGadget for $150. It displays up to ten contacts for one-touch voice calling or sending short text messages. It's waterproof, comes in different colors and even features mini games and fitness challenges, all while tracking a child's GPS location. AngelSense is a GPS and voice monitoring device designed specifically for children with special needs. It is packed with features beyond GPS tracking, including noise monitoring, voice calling, a timeline view of the child's day, "runner mode" for a wandering child, an alarm, indoor location and more.
The truth is, smartwatches are internet devices that are vulnerable to skilled hackers and that store GPS data that could lead a dangerous person to a child. There are obvious benefits to using a device to track and locate a child at any time. But, at this early point in the devices' development, parents should research carefully and choose security and reliability over features or price.
It's not as scary as you think.
There have been numerous pieces written about the dark web and the dangers it could pose to your personal cyber security. The dark web has also been invoked in advertisements by Experian, in which the company offers "free dark web scans" to help customers find out if their "information is on the dark web." This type of language is deliberately misleading, as is the company's definition of the dark web, which basically describes it as a world full of Internet marauders hunting for your social security number. Ironically, in order to acquire the "free dark web scan," Experian itself asks its customers for their social security numbers.
In a certain light, these ads are hilarious in their deliberate misinterpretation of how the dark web works, but there's definitely something sinister about the way they prey on the wallets of the uninformed. Though it sounds dangerous, the dark web isn't the nightmarish hellscape that cyber security companies would have you believe it is. Before understanding the dark web, however, one must first understand the deep web and, by extension, the Internet as a whole.
The Internet is divided into two subsections: the surface web and the deep web. The difference between the two is simple. The surface web is readily accessible via search engines; the deep web is not. While almost every site you visit is probably part of the surface web, there are certain places on the Internet that are necessarily hidden. For example, research papers, online banking, and medical records aren't readily accessible to anyone using Google, as the search engine doesn't index these things. Another example is content that exists behind a paywall, like the New York Times' online newspaper. The dark web can be thought of as a small subset of the deep web, but while the two are often conflated, they aren't the same at all. It's helpful to think of the Internet as an iceberg, with most of it existing beneath the surface. The surface web encompasses about 4% of the entire Internet, while the deep web and dark web represent about 90% and 6%, respectively.
Unlike the deep web, the dark web is only accessible via special networks, the most popular of which is Tor. Browsers like Tor render your activity effectively invisible, using layered encryption to mask your computer's IP address and allow for a truly private Internet experience. Confidentiality is at the heart of Tor's mission, and its developers' goal was to create an Internet free of surveillance and tracking. Unfortunately, when they are guaranteed anonymity, many Internet users get into some pretty unsavory things.
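The "layered encryption" behind Tor is often described as an onion: the client wraps a message in one encryption layer per relay, and each relay peels off exactly one layer, so no single relay sees both who you are and what you sent. The toy below illustrates only that wrap-and-peel structure; the XOR keystream stands in for real cryptography, whereas Tor itself uses layered AES inside authenticated circuits.

```python
# Toy illustration of onion routing's layered encryption.
# XOR with a hash-derived keystream is a stand-in for real ciphers --
# do not use this construction for actual security.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply (or remove -- XOR is symmetric) one encryption layer."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

message = b"hello, private web"
relay_keys = [b"entry", b"middle", b"exit"]  # hypothetical per-relay keys

# Client: wrap layers from the exit relay inward.
onion = message
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# Each relay peels its own layer; only after the last peel is the message readable.
for key in relay_keys:
    onion = xor_layer(onion, key)

assert onion == message
```

The design point is that the entry relay knows your IP address but sees only ciphertext, while the exit relay sees the message but not your address; that separation is what makes the browsing "invisible."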
The dark web first made the news when the online black market known as the Silk Road became a major player in 2011. Until the FBI arrested Silk Road founder Ross Ulbricht in 2013, the site was a marketplace dealing in drugs and other illegal goods, with transactions made via Bitcoin rather than actual cash. Bitcoin itself came to prominence in these illicit markets, though it's slowly falling out of favor with online black markets due to the wild fluctuations in its price over the past few years. Outside of the Silk Road and its successors, there has also been plenty of publicity surrounding the hiring of hit men via the dark web, though most of these services have turned out to be scams. The most famous scam was run by an outfit called Besa Mafia, which would take money from buyers and then, instead of carrying out the hit, report the buyers to the police and get them arrested.
Realistically, though, the dark web isn't nearly as scary as it's made out to be. Yes, there are hackers and illegal activity, but at the core of Tor's project is privacy. If a hacker wanted to steal someone's social security number, or if a pedophile wanted to seek out illicit porn, they wouldn't need to use the dark web to do it. In fact, the dark web only accounts for about 0.2% of the child porn being shared online. While it's fair to assume that many sites on the dark web are used for criminal activity, it's worth mentioning that the FBI can fairly easily track and arrest folks using the dark web. The bureau has even contracted one of Tor's developers to help track down cyber criminals.
The dark web's reach with regard to criminal activities has been largely exaggerated by the mainstream media, and there's no real reason to fear it. If you're someone who strongly values the privacy of your browsing habits, for whatever reason, the dark web provides a different type of Internet, one that's far more secure than your standard browser. If you don't care about your Internet privacy, that's fine too. Dark web hackers aren't going to hunt you down and steal all your information in the night, and you're no less safe on the Internet just because Tor browsers exist. There's a strange tendency in this country to conflate others' privacy and anonymity with a lack of personal security. Cyber security firms have a vested interest in keeping you scared and in the dark about how the Internet works. Don't put too much stock in it. The dark web as we know it has existed since 2002, and we're no worse for wear.
Why the only amendment never brought before the Supreme Court may be more important than you think
You'd be hard-pressed to find someone living in the U.S. (and, perhaps, in Russia) who could not tell you that the Second Amendment involves the right to bear arms. And most people understand that something in the Bill of Rights protects them against unlawful search and seizure, even if they don't know that it's the Fourth Amendment that does so. But sandwiched in between these two celebrity amendments is the all-but-forgotten Third Amendment. Since its inclusion in the Bill of Rights (the first 10 amendments to the Constitution), the Third Amendment has been the subject of only a small handful of cases, and not one of them has gone before the Supreme Court. Here it is:
No Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.
Called the "runt piglet" of the Constitution by the American Bar Association, the Third Amendment would seem on the face of it to have little place in our lives. Does anyone think the government is going to try to use our homes as barracks? The idea is almost laughable. At the same time, this anachronistic addition to our Constitution is fundamentally concerned with the same issue as its better known siblings, namely protecting citizens from excessive government authority, and the elemental conflict between the rights of the individual versus the rights of the federal government. As such, the Third Amendment actually does have some relevance today, and could have even more in the future.
Militarized police force
Written by James Madison in response to calls from several states for greater constitutional protection for individual liberties, the Bill of Rights lists specific prohibitions on governmental power. Its purpose was not to grant rights but to protect rights the framers saw as fundamental and to place specific limits on government power. The Third Amendment centers on the individual's right to privacy in the home, and underscores that citizens have the right not to have the government intrude in that sacred space, even in times of war. When the amendment was written in the eighteenth century, quartering troops in private homes would have been top of mind for Americans and Englishmen. In fact, one of the many accusations Congress leveled against the king in the Declaration of Independence was his "quartering large Bodies of Armed Troops among us." One issue then, as now, is the balance between the rights of individual citizens and the needs of the military. For example, what if the military claims it needs to occupy a home in order to surveil a suspected terrorist cell next door? Beyond that, what actually constitutes "military"? Civil liberties activists warn that our nation's police forces have increasingly taken on a military role, and that the increased use of police in this capacity is bound to create conflicts.
Back in 2013, a family in Nevada claimed that police had occupied their home to gain a tactical advantage against a suspect in a nearby house, thereby violating that family's Third Amendment rights. The case was dismissed in federal court because, among other findings, Judge Andrew Gordon ruled that a municipal police officer is not a soldier. Judge Gordon also followed a 1982 decision that the amendment does not apply to state governments. But as the lines between the police and the military are increasingly blurred, if not obliterated, we might expect to see more of these Third Amendment cases being brought before the courts. As Ilya Somin of the Washington Post pointed out in 2015, "The difficult issues raised by the militarization of police forces suggest that it may be time to stop treating the Third Amendment as just a punchline for clever legal humor."
A surveillance society
The Third Amendment is the only part of the Bill of Rights, and the Constitution as a whole, that actually addresses the relationship between citizens' rights and the military. Scholars have pointed out that it actually underscores civilian control over the military. That power dynamic would be important in any era, but takes on another layer of significance today when what passes for, and acts in the capacity of, the military is very different from what it was in 1791. We live in a world where people leave a digital trail of data wherever they go, and where we rely on the use of independent contractors, satellite surveillance and drones for our national defense, and let's not forget about AI. In a not-too-distant future when our military may be more machine than human, what could having "soldiers" in our "homes" mean?
A 2015 article "Could the Third Amendment be used to fight the surveillance state?" quoted law professor Steven Friedland, who had an idea.
"The Third Amendment no longer will be the forgotten amendment if it is considered to interlock with the Fourth Amendment to provide a check on some domestic mass surveillance intruding on civil life, particularly within the home, business or curtilage of each. In the digital era, the dual purposes of the Amendment should be understood to potentially limit the reach of cyber soldiers and protect the enjoyment of a private tenancy without governmental incursion."
Home is where the heart is
While the US Constitution itself does not contain an express right to privacy, the Bill of Rights reflects the Framers' concerns for protecting specific aspects of it: the right to privacy of beliefs in the First Amendment, the right to privacy of person and possessions against unreasonable search and seizure in the Fourth Amendment, and the right to privacy in the home in the Third Amendment.
The right to a private space we call home is not just an American right. It is unquestionably a fundamental human right. The Third Amendment is largely forgotten in today's world of bots, drones, data, and virtual reality, but that "runt piglet" may end up being the very thing we need to call upon to protect it.
We're at the dawn of a second search engine war.
In the early days of the Internet, Google wasn't the biggest fish in the pond. They weren't worth billions. They didn't have a 78% market share in the US. In fact, at the turn of the century, their competitors were numerous and wide-ranging, both in their approach to searching the web, and in their overall style. When the first search engine war began in 2000, it was fought between so many belligerents that it could more accurately be described as a battle royale. Tons of companies, most of which have since lost their claims to legitimacy, were chasing the de facto monopoly Google has today. One by one though, they fell off, mutating, getting bought out, and merging along the way. Ask Jeeves, MSN, Excite, and even Google's top competitor Yahoo, couldn't keep up. Google has reigned supreme for the past decade. Now, almost thirty years after the invention of the first search engine, it looks as though another war is on the horizon.
The cellophane packaging the Internet arrived in has long since been removed and discarded. Nowadays, everyone–from grandparents to toddlers–is online, the novelty has worn off, and people are beginning to pay attention. With the recent news of Facebook and Cambridge Analytica, it's no longer a secret that tech companies make their money by collecting and selling data. While this practice isn't technically illegal, it certainly rubs people the wrong way, and Google is one of the biggest offenders. From tracking cell phones and search histories, to creating advertisement profiles based on its users, Google has rapidly become the poster-child for the ugly and invasive side of the Internet. Sensing Google's weakness–though whether or not one can call this PR hiccup a weakness is debatable–smaller search engines are crawling out of the woodwork and trying to take a piece of Google's pie by advocating for privacy online.
Should data privacy be the primary deciding factor in which search engine you chose?
Companies like DuckDuckGo and StartPage are attempting to live up to their mission statements, aiming to set a "new standard of trust online" by promising not to profit off of users' personal data. And they've had some pretty huge success so far, shaving close to 10% off of Google's total market share in the past year alone. DuckDuckGo, perhaps the biggest of the private search engines, reportedly averages about 16 million queries per day and has shown steady growth every year since its inception in 2011. In post-Snowden America, Internet privacy is more important than it's ever been, and, barring a massive shift in public opinion, these search engines can only be expected to continue growing.
Even considering DuckDuckGo's meteoric rise, the rest of the second search engine war may be a civil one, as challengers certainly aren't presenting a unified front against Google's tech empire. Between DuckDuckGo, StartPage, Wolfram Alpha, Yippy, and the rest, the relatively niche market is saturated with competitors and is starting to look a bit like the original search engine war of the early 2000s. Google, on the other hand, is an entrenched power. Averaging 3.5 billion search queries per day and valued at over 500 billion dollars, Google is almost unchallengeable. Google also doesn't have to rely solely on its search engine for income, considering the amount of software and hardware it produces. On top of this, DuckDuckGo's foundational promise doesn't help it make money, considering how valuable a person's internet data is.
Currently, websites that support online privacy simply are not well positioned to overtake Google in Search War II, especially considering that Google owns not only the most popular search engine but the most popular browser as well. And despite the public's grumbling, Congress decided to strip some of our commonsense privacy laws last year, electing to allow Internet service providers (ISPs) to sell users' data to third parties without their consent. While this repeal doesn't directly relate to the search engine battle, it sets an important precedent about Internet privacy; stopping data collection anytime soon is nothing more than a pipe dream. That said, it is important that we commend companies like DuckDuckGo for their groundbreaking business model. These websites are still for-profit corporations, but inasmuch as market trends can be used to indicate our moral valence as a country, it would seem that things are looking a little brighter regarding Internet privacy.