id (int64, 0-17.2k) | year (int64, 2k-2.02k) | title (string, 7-208 chars) | url (string, 20-263 chars) | text (string, 852-324k chars)
---|---|---|---|---|
200 | 2,021 | "Lakes Are Losing Oxygen—and Their Inhabitants Are in Danger | WIRED" | "https://www.wired.com/story/lakes-are-losing-oxygen-and-their-inhabitants-are-in-danger" | "By Jennifer Clare Ball. Photograph: Artur Widak/Getty Images
Kevin Rose and his team loaded their sensors into a boat and began rowing. It was late summer at Lake Giles, a small glacial lake in northeast Pennsylvania, and they were there to study the effects of acid rain. But in the process, they discovered something else. Though the lake seemed full of life, the water had been changing. It was taking on a brownish hue, and its surface was warming. Most of all, the lake was running low on dissolved oxygen, a key indicator of its health. As they lowered a sensor into the water, the reading presented another abysmal zero.
This is a condition researchers call “anoxia,” and it’s a big problem. It can harm cold-water fish species and contribute to algae blooms that do even more damage to the lake. As Rose and his team rowed back to shore, they wondered whether their experience at Lake Giles was an anomaly. Now, 15 years later, they know it’s not. Thanks to the help of more than 40 collaborators who collected and analyzed data from a broad array of sources, Rose and his team published a study earlier this month in Nature showing the widespread deoxygenation of lakes around the world.
Together, they compiled data on dissolved oxygen concentrations in more than 300 lakes in the temperate zone, or places with moderate climates and four seasons. The researchers found that the oxygen decline in freshwater was happening at a rate up to 9.3 times greater than in oceans, and that climate change and a lack of water clarity had changed the physical and chemical makeup of those lakes too. That matters, because not only do we get much of our drinking water from lakes and use them for recreational activities, but they support an extensive variety of species. “These substantial declines in oxygen potentially threaten biodiversity, especially the more oxygen-sensitive species,” says Rose.
The team looked for sites with at least 15 years of data in the United States, Canada, and Europe. The earliest sampling dated back to 1941 at a lake in Sweden, but most started around the 1980s, when this kind of monitoring became more common. They used academic, nonprofit, and public data, like statistics from the Environmental Protection Agency’s online Water Quality Portal. “The real power is in a lot of government data sets,” Rose says.
In their analysis, the team found that although surface temperatures have been rising, deep lake waters have remained cool but have increasingly lost their oxygen, due to a phenomenon called stratification. If you have ever waded into a lake from the shore and noticed the water getting substantially colder the deeper you go, you have experienced it. Colder water is denser, so, like oil separating from water, it remains deep in the lake, while the surface water maintains its warmth.
But as lakes’ surfaces have gotten warmer, the difference between the temperatures of their warm and cool parts has grown wider. So has the difference in their densities. That means more stratification. Once those two layers stop mixing well, oxygen from the surface is no longer being pulled into the deeper waters. Hotter temperatures also make the oxygen less soluble and less likely to be absorbed into the water.
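To make the solubility point concrete, here is a minimal sketch, assuming freshwater at sea-level pressure, of the standard Benson-Krause fit used in USGS saturation tables (background chemistry, not code from the study):

```python
import math

def o2_saturation_mg_per_l(temp_c: float) -> float:
    """Dissolved-oxygen saturation in freshwater at sea-level pressure
    (Benson-Krause fit, as used in USGS saturation tables)."""
    t = temp_c + 273.15  # temperature in kelvin
    ln_do = (-139.34411
             + 1.575701e5 / t
             - 6.642308e7 / t**2
             + 1.243800e10 / t**3
             - 8.621949e11 / t**4)
    return math.exp(ln_do)

for c in (5, 15, 25):
    print(f"{c} degC -> {o2_saturation_mg_per_l(c):.1f} mg/L")
# 5 degC -> 12.8, 15 degC -> 10.1, 25 degC -> 8.3: warmer water holds less oxygen
```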
Stratification is normally affected by season; it increases as the air warms. But climate change is hurrying that up. “That spring season has been moving earlier and earlier, which means stratification, that difference in density, is moving earlier and earlier too,” Rose says. Because it starts sooner, stratification lasts longer throughout the year, leaving the lake and its inhabitants with lower oxygen for prolonged periods of time.
Rose identified a second problem too: Deep water is becoming less clear because of a host of factors including erosion, algal growth, and fertilizer runoff from nearby agricultural fields and residential developments. Murkier waters make plants less likely to survive, which means less photosynthesis and less oxygen down below. And that, of course, is bad news for the lakes’ creatures. “Just like humans, every complex life form on the planet depends on oxygen,” Rose says. “In water, that’s in the dissolved form.”

Each species has a unique critical oxygen threshold for survival. Deoxygenation particularly affects cold-water fish like trout, which need 7 milligrams of oxygen per liter of water, and salmon, which need 6 milligrams per liter. (Warm-water species, like bass and carp, both need 5 milligrams per liter.)

“Even when you get down to low levels of oxygen concentration requirements, there are demonstrated impacts on performance of individual organisms in the water,” says Peter Raymond, a professor of ecosystem ecology at Yale University, who peer-reviewed the paper. “They don’t perform as well. They become stressed, as you might imagine.”

The combination of low oxygen and warmer water is particularly worrisome. For example, if temperatures and oxygen levels are not in the optimal range, it can skew fish’s reproductive timing, affecting the amount they reproduce. Warming waters may also supercharge or deactivate their immune systems, which can compromise the degree to which they can fight pathogens in a climate-altered environment.
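The per-species thresholds quoted above lend themselves to a simple illustration. A toy sketch (the threshold numbers come from the article; the function and species list are mine):

```python
# Critical dissolved-oxygen thresholds quoted in the article, in mg O2 per liter.
OXYGEN_THRESHOLDS_MG_L = {
    "trout": 7.0,   # cold-water
    "salmon": 6.0,  # cold-water
    "bass": 5.0,    # warm-water
    "carp": 5.0,    # warm-water
}

def species_at_risk(dissolved_o2_mg_l: float) -> list[str]:
    """Return the species whose critical threshold exceeds the measured reading."""
    return [s for s, t in OXYGEN_THRESHOLDS_MG_L.items() if dissolved_o2_mg_l < t]

print(species_at_risk(6.5))  # ['trout']: cold-water fish are squeezed first
print(species_at_risk(0.0))  # anoxia: every species is below its threshold
```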
Because fish are ectotherms, meaning they regulate their body temperature based on external temperature, their metabolism speeds up in warm waters, which increases the amount of oxygen they need to survive, says James Whitney, a professor of biology at Pittsburg State University in Kansas, who was not affiliated with the study. “If it gets bad enough, they can suffocate, causing fish kills,” Whitney says.
For example, during a 2018 drought in Kansas, Whitney recalls that the water in streams was warmer, and there was less of it due to lack of rain. Fish were gulping oxygen from the surface waters, but there wasn’t enough of it to go around, and some of them died.
Deoxygenation can become a vicious cycle. When lakes go anoxic, they build up sediment on the bottom, which then releases phosphorus, which can trigger algal growth on the surface. Lakes can develop harmful algal blooms, which eat up whatever oxygen is left. Some produce toxins that kill fish, mammals, and birds; in extreme cases, they may also cause human illness and even death.
“It’s not a hypothetical that organisms are going to be impacted. It’s going to happen,” Raymond says.
While there’s no way to directly add oxygen back to lakes, he points out, there are other ways to improve ecosystem health. The biggest change has to happen on the global level: Reducing greenhouse gas emissions will stop lake waters from warming, and from losing their capacity to hold dissolved oxygen. But local caretaking matters too. “There is a direct climate impact here, but there is a lot that can be done at the local level to maintain high oxygen concentrations,” agrees Rose.
Rose and several other study coauthors contribute to GLEON (the Global Lake Ecological Observatory Network), a grassroots group of scientists from around the world who are focused on conserving freshwater resources. They share data in order to catch ecosystem changes early, as lakes are among the first to exhibit measurable shifts. Some of their recommendations include using data from one lake to learn about others, and assessing risk based on real-time measurements of local water temperature and dissolved oxygen levels. Planting trees as buffer zones around lakes can prevent erosion, which can increase water clarity and reduce nutrient runoff. This can be coordinated by state agencies that manage water resources or individual lake associations. The Environmental Protection Agency also recommends that residents who live near bodies of water use fertilizer according to the instructions on the label in order to prevent excess nitrogen and phosphorus from entering the lakes, inadvertently fertilizing algae blooms.
“Proactive management is needed—or is going to be needed—in the future in order to even maintain the status quo,” Rose says. And by “the future,” he doesn’t mean decades. He means in the next couple of years. “This is an ongoing issue,” he says.
" |
201 | 2,022 | "Extreme Heat in the Oceans Is Out of Control | WIRED" | "https://www.wired.com/story/extreme-heat-in-the-oceans-is-out-of-control" | "By Matt Simon. Photograph: Anton Plutov/Getty Images
Without the ocean, climate change on land would be even more catastrophic. The seas have absorbed over 90 percent of the excess heat from greenhouse gas emissions, essentially saving humanity from itself. But it’s taking a toll: The ocean, too, is rapidly warming. And just as we have heat waves on land, parts of the ocean can experience temperature spikes too.
New research exposes just how bad the problem has gotten. Researchers from the Monterey Bay Aquarium began their calculation by analyzing surface temperature data from 1870 to 1919, sampled from across the globe. (Yes, ships have been taking the ocean’s temperature for 150 years.) Once they knew the historical high temperatures for each month in different parts of the ocean, they had a baseline for marine temperature extremes before the escalation of climate change. In the 19th century, only 2 percent of the ocean surface experienced such extremes.
Then they compared this data to readings in the same places taken from 1920 to 2019. Their results show that by the year 2014, half of the ocean surface was logging temperatures once considered extreme—exceeding those historical highs. By 2019, that figure was 57 percent. In 150 years, the occurrence of extreme heat had become the new normal.
These spikes are different from the overall rise in water temperature, which is also caused by global warming. For one thing, a particular region can come back down off of a high when winter arrives. And the location of the spikes can vary over time, meaning some places were affected earlier than others. So while half the ocean surface was logging temperature extremes by 2014, the South Atlantic had actually crossed that threshold back in 1998.
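The baseline-and-exceedance logic the researchers describe is easy to mock up. A toy reconstruction with synthetic data (array names, shapes, and the warming trend are all invented; the study used gridded historical sea-surface observations):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2020)
# Synthetic monthly sea-surface temperatures: (years, 12 months, grid cells),
# with a small invented warming trend layered on top of random variability.
sst = 15 + rng.normal(0, 1, (len(years), 12, 500))
sst = sst + 0.01 * (years - 1870)[:, None, None]

# Historical high for each calendar month in each cell, 1870-1919:
monthly_max = sst[years <= 1919].max(axis=0)          # shape (12, cells)

def extreme_fraction(year: int) -> float:
    """Fraction of cells that beat their 1870-1919 high for the same
    calendar month at least once in the given year."""
    exceed = sst[years == year][0] > monthly_max       # shape (12, cells)
    return exceed.any(axis=0).mean()

print(f"1930: {extreme_fraction(1930):.0%}   2019: {extreme_fraction(2019):.0%}")
```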
“And that is ludicrous,” says ecologist Kyle Van Houtan, president and CEO of the Loggerhead Marinelife Center, who coauthored today’s paper in the journal PLOS Climate describing the findings. (Van Houtan did the research in his previous role as the chief scientist at the aquarium, with marine biologist Kisei Tanaka, now at the National Oceanic and Atmospheric Administration.) “There's some major changes going on right now in the ocean, and we think that this calculation, this index, of marine heat that we built is helping to describe why,” he continues. “I think extreme marine heat is much more of a problem than we thought it was. It's actually common today, which is scary, because historically it was just extreme—it was rare.”

“The trends they're seeing are consistent with results from a lot of other papers that conclude that marine heat waves are becoming more frequent, they're warmer, and they're lasting longer,” says Bridget Seegers, an oceanographer at NASA, who wasn’t involved in the work. (She was, though, among the researchers who recently reported that 2021 was the sixth hottest year ever recorded.)

[Map: Courtesy of Monterey Bay Aquarium]
Take a look at the map above. The redder the area, the more months that it logged temperatures higher than that historical baseline measured between 1870 and 1919. Or, put another way, what used to be extreme is now normal in those red areas. Notice how in the 1980s the extreme heat was mostly around Antarctica, but by the 2010s it had spread all over the world, particularly around the equator.
“Now, that's really concerning,” says Van Houtan, “because obviously that's the distribution of corals. Coral reefs in the past decade have had dramatic and widespread bleaching events.” This happens when warming waters stress the corals , causing them to release the photosynthetic algae that help them produce energy. Without energy, the corals die , contributing to the collapse of the reef ecosystem.
[Chart: Courtesy of Monterey Bay Aquarium]
The graphic above shows another way of looking at it by country. The areas that tallied the most months of extreme marine heat are the Maldives in the Indian Ocean, Tanzania on the east coast of Africa, and Micronesia in the Pacific—all of them are along the equator.
You might have noticed a glaring exception to the overall heating trend in the first map. The Pacific Ocean off South America—that big blue blob on the map—resists extreme heat because of the upwelling of cold, deep waters. But, Van Houtan cautions, this shouldn’t be read as an indicator that this area is unaffected by climate change. “This is not a map of warming. Everywhere is warming,” he says. “This is a map of the occurrence of extreme heat. And so those large blue areas may be warming—and in fact, are warming—but they are areas that historically have had a lot of variability in those systems.”

“Different places actually kind of take turns increasing rapidly,” agrees Daniel Rudnick, a physical oceanographer at the Scripps Institution of Oceanography, who wasn’t involved in the new research. For example, Rudnick studies the North Pacific, where he saw a major increase in heat in 2014. “I think that's going to be the story in general, that different regions will kind of take their turns. There is a general trend for the whole Earth to warm, but how it will be happening in any region will be different.”

Ocean depth plays a big role in how regions are affected. While the depth of the middle of the Pacific Ocean allows cooler waters to upwell, the shallower areas around tropical islands get no such relief. Island nations are at additional risk because water expands as it warms, a phenomenon known as thermal expansion. “The same mass of water takes up more volume, and so there you go—sea level rise,” says Rudnick.
In fact, roughly half of human-caused sea level rise is from runoff from melting glaciers, and the other half is from warmer waters just taking up more space. But more locally, almost all of the regional variability in sea level rise is due to thermal expansion, Rudnick adds. The hotter the coastal water, the more the sea rises. This can happen quickly with extreme heat events, whereas sea level rise from ice melt happens at a, well, more glacial pace.
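The thermal expansion arithmetic is simple enough to do on the back of an envelope. A sketch, with every number assumed for illustration (the expansion coefficient of seawater varies with temperature and salinity):

```python
# Back-of-the-envelope thermal expansion of a warmed surface layer.
alpha = 2.1e-4   # 1/degC, a representative value for warm seawater (assumed)
layer_m = 100.0  # depth of the warmed layer in meters (assumed)
delta_t = 1.0    # warming in degC (assumed)

rise_m = alpha * layer_m * delta_t
print(f"local sea level rise: ~{rise_m * 100:.0f} cm")  # ~2 cm
```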
The ecological consequences of rising heat—both in terms of overall warming and spikes from extreme heat—are both obvious and subtle. Species with the ability to flee, like fish, are moving toward the poles. “Like lobster, for example. We're seeing some dramatic changes in the geographic distribution of that fishery off the northeast US,” says Van Houtan. “There used to be a fishery in New York and New Jersey, and that is essentially nonexistent. And now Maine is thriving, but in 10 years Maine may be on the back side, and it may just be a Canadian fishery moving forward.” Similarly, subsistence fishers in the tropics may lose their livelihoods as entire fish populations move away.
But species that are fixed in place, like sponges and corals, can’t leg it (or fin it) to cooler areas. “The fixed ones are likely going to be seeing absolute thresholds of heat that they can't coexist with, and so you're going to see a decline of those,” says Van Houtan.
Extreme heat poses an additional danger, even for species that are already in the process of gradually migrating to cooler areas. “When you have these abrupt events like marine heat waves, they don't give any time for adaptation,” says Seegers. “So they can result in really high mortality. This happens across ecosystems from coral reefs to kelp forests, and they can cause seabirds to die.”

“It can take the system years to recover,” Seegers adds, “because if you have a lot of mortality, it's not going to necessarily go right back to normal.”

Further complicating matters, these heat extremes often coincide with low winds. The wind plays an important role in the ocean food chain, because it mixes the water, bringing up nutrients from the depths. Tiny photosynthetic organisms called phytoplankton rely on these nutrients, just the way the plants in your garden rely on fertilizer. These phytoplankton feed animal species called zooplankton, which feed fish, which feed marine mammals and seabirds. Losing the phytoplankton to extreme heat, then, assaults the base of the food web.
And crucially, phytoplankton produce most of the oxygen in our atmosphere. “The reality is that we have two lungs on the planet: One of them's green—the forests—and the other one's blue—the ocean. The ocean supplies more than half of the oxygen that we breathe,” says Van Houtan. “It's no understatement to say that the ocean is the beating heart of our climate system, and the ocean is absolutely critical for sustaining human life on this planet.” The only way to keep that life support system online is by massively cutting greenhouse gas emissions, and fast. The oceans can’t take much more of this heat.
" |
202 | 2,022 | "A Caustic Shift Is Coming for the Arctic Ocean | WIRED" | "https://www.wired.com/story/a-caustic-shift-is-coming-for-the-arctic-ocean" | "By Gregory Barber. Photograph: Alexander Semenov/Science Source
Imagine, for a moment, that you are standing on a pier by the sea, grasping, somewhat inexplicably, a bowling ball. Suddenly you lose your grip and it tumbles down into the waves below with a decisive plonk. Now imagine that the bowling ball is made of gas—carbon dioxide, to be specific—compressed down into that familiar size and weight. That’s approximately your share, on a rough per capita basis, of the human-caused carbon emissions that are absorbed by the sea every day: your bowling ball’s worth of extra CO₂, plus the 8 billion or so from everyone else. Since the Industrial Revolution, the oceans have sucked up 30 percent of that extra gas.
The reason so much CO₂ ends up in the oceans is that the molecule is extremely hydrophilic. It loves to react with water—much more than other atmospheric gasses, like oxygen. The first product of that reaction is a compound called carbonic acid, which soon gives up a hydrogen ion. That’s a recipe for a caustic solution. The more hydrogen ions a solution has, the more acidic it is, which is why as the CO₂ in Earth’s atmosphere has increased, its water has gotten more acidic too. By the end of the century, models predict the oceans will reach a level of acidity that hasn’t been seen in millions of years.
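Because pH is logarithmic in hydrogen-ion concentration, seemingly small shifts hide large chemical changes. A quick worked check (the endpoints of roughly 8.2 preindustrial and 8.1 today for mean surface-ocean pH are widely cited round numbers, not figures from this article):

```python
# pH = -log10 of the hydrogen-ion concentration, so each 0.1 drop in pH
# multiplies [H+] by 10**0.1, about 1.26x.
h_preindustrial = 10 ** -8.2  # assumed mean surface-ocean pH, preindustrial
h_today = 10 ** -8.1          # assumed mean surface-ocean pH, today

increase = h_today / h_preindustrial - 1
print(f"hydrogen-ion increase: {increase:.0%}")  # ~26%
```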
Prior periods of acidification and warming have been linked with mass die-offs of some aquatic species, and caused others to go extinct. Scientists believe this round of acidification is happening much faster.
That change is striking hardest and fastest in the planet’s northernmost waters, where the effects of acidification are already acute, says Nina Bednaršek, a researcher at Slovenia’s National Institute of Biology. She studies pteropods, tiny sea snails that are also known as “sea butterflies” due to their translucent, shimmering shells that look uncannily like wings. But scoop those snails from Arctic waters, and a close look at their exoskeletons reveals a duller reality. In more corrosive water, the once-pristine shells become flaked and pock-marked—a harbinger of an early death. Those critters are “the canary in the coal mine,” as Bednaršek puts it—a critical part of the food chain that supports bigger fish, crabs, and mammals, and a sign of coming distress for more species as the oceans become more caustic.
The icy Arctic waters are a special case for several reasons, says Wei-Jun Cai, an oceanographer at the University of Delaware. One is that the ice is melting. It typically acts as a lid on the water underneath it, preventing the exchange of gasses between the atmosphere and the ocean. When it’s gone, the water sucks up the extra CO₂ in the air above it. Plus, that meltwater dilutes compounds that could neutralize the acid. And then it usually just sits there, failing to mix much with the deeper water below. That results in a pool of water near the surface that’s extra acidic. In a study recently published in the journal Science, Cai’s team looked at data from Arctic seafaring missions between 1994 and 2020 and concluded that acidification was happening at three to four times the rate of other ocean basins. “Acidification would be fast, we knew. But we didn’t know how fast,” Cai says. The culprit, they surmise, is the rapid decrease in the range of summer ice over those years. Between 1979 and 2021, the end-of-summer ice shrank by an average of 13 percent per decade.
It’s tricky, though, to put specific numbers on the acidification rates across the entire Arctic seascape. In some places, the water is shallow and mixes heavily with meltwater and freshwater from the surrounding continents. In other places, it’s deeper and is currently locked in with ice all year. Ideally, researchers want to have a window into everything: data that’s consistent from year to year, covering a wide territory and varied seasons, capturing the sometimes decades-long churn of ocean currents. Short-term timing matters immensely as well, as local conditions can change drastically on a week-to-week basis depending on factors like the activity of phytoplankton, which may briefly bloom in an area during the summer and suddenly suck up some of the extra CO₂.
But it’s tough to get data up there. Scientists studying acidification, like Cai, are peering through a narrow periscope—in his case, relying on summertime voyages across a relatively small portion of the sea, which is still mostly ice-locked.
But there are other ways of deciphering the bigger trends. James Orr, a senior scientist at France’s Atomic Energy Commission, uses global climate models that track trends in ocean salinity, temperature, and the movement of biological forces in the water, such as algae. Then his team can make predictions about where acidification is headed. In a study that recently appeared in Nature, Orr and his coauthors found that those models suggest that by the end of this century, the usual seasonal pattern of ocean acidity may be turned on its head. Algae blooms normally reduce acidity during the summer. But as the ice melts and shrinks back weeks earlier than before, instead of offering a reprieve, summertime is poised to become the period of highest acidity all year. For Orr, that was a startling conclusion. “We thought it would be quite boring, that could be up to a month's shift in the pattern,” he says. “But it could be up to six months.”

While ocean acidity alone is bad news for many Arctic organisms, Orr points out that the most severe impacts are likely to come from the confluence of many climate-related factors—especially rising water temperatures.
Seasonal shifts have the potential to make those effects all the more potent, adds Claudine Hauri, an oceanographer at the University of Alaska, Fairbanks, who wasn’t involved in the research. “We have moved on to realizing that ocean acidification doesn’t happen on its own,” she says. “We have warming. We have decreased salinity. We have less oxygen. Now suddenly there are experiments that show organisms that don’t care about acidification alone do care if there are temperature increases too.”

At a recent workshop held by the Alaska Ocean Acidification Network, a regional group of experts, an array of results from crab and fish researchers illustrated the wide-ranging effects of changing water. In sum: It’s complicated, because the animals themselves are complicated. A species like the king crab may live for decades and progress through many life stages, each of which is best suited for a particular type of aquatic chemistry. It only takes one developmental disruption—of growth as a larva, or during shell-building or reproduction—to throw off the whole lifecycle. Meanwhile, certain species of fish, like Pacific cod, have seen their ability to swim compromised in more acidic water. Others have lost their hearing.
Some species seem to do just fine.
A key to better understanding the ecological effects of ocean acidity is learning more about where it is happening, and with what intensity. Even with more attention on acidification, and with more of the Arctic open to research boats as the ice melts, the challenges and expenses of crewed research voyages remain. As an alternative, Hauri’s team has been working on an autonomous sub, called the Carbon Seaglider, since 2014. The hot pink vessel, designed to dive 3,000 feet under the surface, is equipped with sensors to pick up CO₂ and methane concentrations. The first research expedition will be launched in February in the Gulf of Alaska, in the Northern Pacific. If all goes well, Hauri imagines a fleet of them sailing further north in the Arctic for years to come.
" |
203 | 2,023 | "Ukraine Suffered More Wiper Malware in 2022 Than Anywhere, Ever | WIRED" | "https://www.wired.com/story/ukraine-russia-wiper-malware" | "By Andy Greenberg. Photograph: Celestino Arce/Getty Images
Amidst the tragic toll of Russia's brutal and catastrophic invasion of Ukraine, the effects of the Kremlin's long-running campaign of destructive cyberattacks against its neighbor have often—rightfully—been treated as an afterthought. But after a year of war, it's becoming clear that the cyberwar Ukraine has endured for the past year represents, by some measures, the most active digital conflict in history. Nowhere on the planet has ever been targeted with more specimens of data-destroying code in a single year.
Ahead of the one-year anniversary of Russia's invasion, researchers at the Slovak cybersecurity firm ESET, as well as at Fortinet and the Google-owned incident-response firm Mandiant, have all independently found that in 2022, Ukraine saw far more specimens of “wiper” malware than in any previous year of Russia's long-running cyberwar targeting Ukraine—or, for that matter, any other year, anywhere. That doesn't necessarily mean Ukraine has been harder hit by Russian cyberattacks than in past years; in 2017, Russia's military intelligence hackers known as Sandworm released the massively destructive NotPetya worm.
But the growing volume of destructive code hints at a new kind of cyberwar that has accompanied Russia's physical invasion of Ukraine, with a pace and diversity of cyberattacks that's unprecedented.
“In terms of the sheer number of distinct wiper malware samples,” says ESET senior malware researcher Anton Cherepanov, “this is the most intense use of wipers in all computer history.” Researchers say they're seeing Russia's state-sponsored hackers throw an unprecedented variety of data-destroying malware at Ukraine in a kind of Cambrian explosion of wipers. They've found wiper malware samples there that target not just Windows machines, but Linux devices and even less common operating systems like Solaris and FreeBSD. They've seen specimens written in a broad array of different programming languages, and with different techniques to destroy target machines' data, from corrupting the partition tables that organize disks' filesystems, to repurposing Microsoft's SDelete command-line tool, to overwriting files wholesale with junk data.
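On the defensive side, one crude but common triage signal for files "overwritten wholesale with junk data" is byte entropy: random junk sits near the 8-bits-per-byte ceiling, while ordinary documents and executables do not. A minimal sketch (the threshold is illustrative, and high entropy also flags encrypted or compressed files, so it is a hint, not a verdict):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_junk_filled(path: str, threshold: float = 7.9) -> bool:
    """Flag a file whose first 1 MiB is statistically indistinguishable
    from random bytes, one hallmark of a junk-data overwrite."""
    with open(path, "rb") as f:
        return shannon_entropy(f.read(1 << 20)) > threshold
```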
In total, Fortinet counted 16 different “families” of wiper malware in Ukraine over the past 12 months, compared to just one or two in previous years, even at the height of Russia's cyberwar prior to its full-scale invasion. “We're not talking about, like, doubling or tripling,” says Derek Manky, the head of Fortinet's threat intelligence team. “It's an explosion, another order of magnitude.” That variety, researchers say, may be a sign of the sheer number of malware developers whom Russia has assigned to target Ukraine, or of Russia's efforts to build new variants that can stay ahead of Ukraine's detection tools, particularly as Ukraine has hardened its cybersecurity defenses.
Fortinet has also found that the growing volume of wiper malware specimens hitting Ukraine may in fact be creating a more global proliferation problem. As those malware samples have shown up on the malware repository VirusTotal or even the open-source code repository GitHub, Fortinet researchers say the company's network security tools have detected other hackers reusing those wipers against targets in 25 countries around the world. “Once that payload is developed, anyone can pick it up and use it,” Manky says.
Despite that sheer volume of wiper malware, Russia's cyberattacks against Ukraine in 2022 have in some respects seemed relatively ineffective compared to previous years of its conflict there. Russia has launched repeated destructive cyberwarfare campaigns against Ukraine since the country's 2014 revolution, all seemingly designed to weaken Ukraine's resolve to fight, sow chaos, and make Ukraine appear to the international community to be a failed state. From 2014 to 2017, for instance, Russia's GRU military intelligence agency carried out a series of unprecedented cyberattacks: They disrupted and then attempted to spoof results for Ukraine's 2014 presidential election, caused the first-ever blackouts triggered by hackers, and finally unleashed NotPetya, a self-replicating piece of wiper malware that hit Ukraine, destroying hundreds of networks across government agencies, banks, hospitals, and airports before spreading globally to cause a still-unmatched $10 billion in damage.
But since early 2022, Russia's cyberattacks against Ukraine have shifted into a different gear. Instead of masterpieces of malevolent code that required months to create and deploy, as in Russia's earlier attack campaigns, the Kremlin's cyberattacks have accelerated into quick, dirty, relentless, repeated, and relatively simple acts of sabotage.
In fact, Russia appears, to some degree, to have swapped quality for quantity in its wiper code. Most of the dozen-plus wipers launched in Ukraine in 2022 have been relatively crude and straightforward in their data destruction, with none of the complex self-spreading mechanisms seen in older GRU wiper tools like NotPetya, BadRabbit , or Olympic Destroyer.
In some cases, they even show signs of rushed coding jobs. HermeticWiper, one of the first wiping tools that hit Ukraine just ahead of the February 2022 invasion, used a stolen digital certificate to appear legitimate and avoid detection, a sign of sophisticated pre-invasion planning. But HermeticRansom, a variant in the same family of malware designed to appear as ransomware to its victims, included sloppy programming errors, according to ESET. HermeticWizard, an accompanying tool designed to spread HermeticWiper from system to system, was also bizarrely half-baked. It was designed to infect new machines by attempting to log in to them with hardcoded credentials, but it only tried eight usernames and just three passwords: 123, Qaz123, and Qwerty123.
Perhaps the most impactful of all of Russia's wiper malware attacks on Ukraine in 2022 was AcidRain, a piece of data-destroying code that targeted Viasat satellite modems.
That attack knocked out a portion of Ukraine's military communications and even spread to satellite modems outside the country, disrupting the ability to monitor data from thousands of wind turbines in Germany. The customized coding needed to target the form of Linux used on those modems suggests, like the stolen certificate used in HermeticWiper, that the GRU hackers who launched AcidRain had carefully prepared it ahead of Russia's invasion.
But as the war has progressed—and as Russia has increasingly appeared unprepared for the longer-term conflict it mired itself in—its hackers have switched to shorter-term attacks, perhaps in an effort to match the pace of a physical war with constantly changing front lines. By May and June, the GRU had come to increasingly favor the repeated use of the data-destruction tool CaddyWiper, one of its simplest wiper specimens. According to Mandiant, the GRU deployed CaddyWiper five times in those two months and four more times in October, changing its code only enough to avoid detection by antivirus tools.
Even then, however, the explosion of new wiper variants has only continued: ESET, for instance, lists Prestige, NikoWiper, Somnia, RansomBoggs, BidSwipe, ZeroWipe, and SwiftSlicer all as new forms of destructive malware—often posing as ransomware—that have appeared in Ukraine since just October.
But ESET doesn't see that flood of wipers as a kind of intelligent evolution, so much as a kind of brute-force approach. Russia appears to be throwing every possible destructive tool at Ukraine in an effort to stay ahead of its defenders and inflict whatever additional chaos it can in the midst of a grinding physical conflict.
“You can’t say their technical sophistication is increasing or decreasing, but I would say they’re experimenting with all these different approaches,” says Robert Lipovsky, ESET's principal threat intelligence researcher. “They're all in, and they're trying to wreak havoc and cause disruption.”
" |
204 | 2,016 | "Inside the Cunning, Unprecedented Hack of Ukraine's Power Grid | WIRED" | "https://www.wired.com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid" | "By Kim Zetter. Photograph: Jose A. Bernat Bacet/Getty Images
It was 3:30 p.m. last December 23, and residents of the Ivano-Frankivsk region of Western Ukraine were preparing to end their workday and head home through the cold winter streets. Inside the Prykarpattyaoblenergo control center, which distributes power to the region's residents, operators too were nearing the end of their shift. But just as one worker was organizing papers at his desk that day, the cursor on his computer suddenly skittered across the screen of its own accord.
He watched as it navigated purposefully toward buttons controlling the circuit breakers at a substation in the region and then clicked on a box to open the breakers and take the substation offline. A dialogue window popped up on screen asking to confirm the action, and the operator stared dumbfounded as the cursor glided to the box and clicked to affirm. Somewhere in a region outside the city he knew that thousands of residents had just lost their lights and heaters.
The operator grabbed his mouse and tried desperately to seize control of the cursor, but it was unresponsive. Then as the cursor moved in the direction of another breaker, the machine suddenly logged him out of the control panel. Although he tried frantically to log back in, the attackers had changed his password, preventing him from gaining re-entry. All he could do was stare helplessly at his screen while the ghosts in the machine clicked open one breaker after another, eventually taking about 30 substations offline. The attackers didn't stop there, however. They also struck two other power distribution centers at the same time, nearly doubling the number of substations taken offline and leaving more than 230,000 residents in the dark. And as if that weren't enough, they also disabled backup power supplies to two of the three distribution centers, leaving operators themselves stumbling in the dark.
The hackers who struck the power centers in Ukraine---the first confirmed hack to take down a power grid---weren't opportunists who just happened upon the networks and launched an attack to test their abilities; according to new details from an extensive investigation into the hack, they were skilled and stealthy strategists who carefully planned their assault over many months, first doing reconnaissance to study the networks and siphon operator credentials, then launching a synchronized assault in a well-choreographed dance.
"It was brilliant," says Robert M. Lee, who assisted in the investigation. Lee is a former cyber warfare operations officer for the US Air Force and is co-founder of Dragos Security, a critical infrastructure security company. "In terms of sophistication, most people always [focus on the] malware [that's used in an attack]," he says. "To me what makes sophistication is logistics and planning and operations and ... what's going on during the length of it. And this was highly sophisticated." Ukraine was quick to point the finger at Russia for the assault. Lee shies away from attributing it to any actor but says there are clear delineations between the various phases of the operation that suggest different levels of actors worked on different parts of the assault. This raises the possibility that the attack might have involved collaboration between completely different parties---possibly cybercriminals and nation-state actors.
“This had to be a well-funded, well-trained team. … [B]ut it didn’t have to be a nation-state,” he says. It could have started out with cybercriminals getting initial access to the network, then handing it off to nation-state attackers who did the rest.
Regardless, the successful assault holds many lessons for power generation plants and distribution centers here in the US, experts say; the control systems in Ukraine were surprisingly more secure than some in the US, since they were well-segmented from the control center business networks with robust firewalls. But in the end they still weren't secure enough---workers logging remotely into the SCADA network, the Supervisory Control and Data Acquisition network that controlled the grid, weren't required to use two-factor authentication, which allowed the attackers to hijack their credentials and gain crucial access to systems that controlled the breakers.
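To make concrete what a second factor would have added: with time-based one-time passwords, a stolen VPN password alone no longer opens the SCADA network, because each login also requires a code derived from a shared secret and the current time. A minimal RFC 6238 sketch (illustrative only; nothing here reflects the utilities' actual systems):

```python
import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Standard TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter,
    dynamically truncated to a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; real secrets stay on the token
```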
The power wasn't out long in Ukraine: just one to six hours for all the areas hit. But more than two months after the attack, the control centers are still not fully operational, according to a recent US report.
Ukrainian and US computer security experts involved in the investigation say the attackers overwrote firmware on critical devices at 16 of the substations, leaving them unresponsive to any remote commands from operators. The power is on, but workers still have to control the breakers manually.
That's actually a better outcome than what might occur in the US, experts say, since many power grid control systems here don't have manual backup functionality, which means that if attackers were to sabotage automated systems here, it could be much harder for workers to restore power.
Multiple agencies in the US helped the Ukrainians in their investigation of the attack, including the FBI and DHS. Among computer security experts who consulted on the wider investigation were Lee and Michael J. Assante, both of whom teach computer security at the SANS Institute in Washington DC and plan to release a report about their analysis today. They say investigators were pleasantly surprised to discover that the Ukrainian power distribution companies had a vast collection of firewall and system logs that helped them reconstruct events---an uncommon bonanza for any corporate network, but an even rarer find for critical infrastructure environments, which seldom have robust logging capabilities.
According to Lee and a Ukrainian security expert who assisted in the investigation, the attacks began last spring with a spear-phishing campaign that targeted IT staff and system administrators working for multiple companies responsible for distributing electricity throughout Ukraine. Ukraine has 24 regions, each divided into between 11 and 27 provinces, with a different power distribution company serving each region. The phishing campaign delivered email to workers at three of the companies with a malicious Word document attached. When workers clicked on the attachment, a popup appeared asking them to enable macros for the document. If they complied, a program called BlackEnergy3---variants of which have infected other systems in Europe and the US---infected their machines and opened a backdoor to the hackers. The method is notable because most intrusions these days exploit a coding mistake or vulnerability in a software program, but in this case the attackers exploited an intentional feature of Microsoft Word. Exploiting the macros feature is an old-school method from the '90s that attackers have recently revived in multiple attacks.
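Macro-laden attachments of this sort are straightforward to screen for. A defensive sketch: modern macro-enabled Office files (.docm, .xlsm) are zip archives whose VBA code lives in a vbaProject.bin part, so the check below covers them; legacy binary .doc/.xls files, like those used in many such campaigns, need an OLE-aware parser such as the oletools package instead. The filename is hypothetical:

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Flag OOXML Office files that carry a VBA macro project."""
    try:
        with zipfile.ZipFile(path) as z:
            return any(name.endswith("vbaProject.bin") for name in z.namelist())
    except zipfile.BadZipFile:
        return False  # not OOXML; hand off to an OLE-aware scanner

# has_vba_macros("invoice.docm")  # hypothetical attachment name
```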
The initial intrusion got the attackers only as far as the corporate networks. But they still had to get to the SCADA networks that controlled the grid. The companies had wisely segregated those networks with a firewall, so the attackers were left with two options: either find vulnerabilities that would let them punch through the firewalls or find another way to get in. They chose the latter.
Over many months they conducted extensive reconnaissance, exploring and mapping the networks and getting access to the Windows Domain Controllers, where user accounts for networks are managed. Here they harvested worker credentials, some of them for VPNs the grid workers used to remotely log in to the SCADA network. Once they got into the SCADA networks, they slowly set the stage for their attack.
First they reconfigured the uninterruptible power supply, or UPS, responsible for providing backup power to two of the control centers. It wasn't enough to plunge customers into the dark---when power went out for the wider region they wanted operators to be blind, too. It was an egregious and aggressive move, the sort that could be interpreted as a "giant fuck you" to the power companies, says Lee.
Each company used a different distribution management system for its grid, and during the reconnaissance phase, the attackers studied each of them carefully. Then they wrote malicious firmware to replace the legitimate firmware on serial-to-Ethernet converters at more than a dozen substations (the converters are used to process commands sent from the SCADA network to the substation control systems). Taking out the converters would prevent operators from sending remote commands to re-close breakers once a blackout occurred. "Operation-specific malicious firmware updates [in an industrial control setting] has never been done before," Lee says. "From an attack perspective, it was just so awesome. I mean really well done by them." The same model of serial-to-Ethernet converter used in Ukraine is used in the US power-distribution grid.
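Devices like these typically accepted whatever firmware image they were handed; the minimal countermeasure is verifying an image against a trusted digest before flashing. A sketch of that check (the known-good digest is a placeholder, not a real vendor value):

```python
import hashlib

KNOWN_GOOD_SHA256 = "0" * 64  # placeholder for a vendor-published digest

def firmware_matches_known_good(path: str) -> bool:
    """Hash a candidate firmware image and compare it to the trusted digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == KNOWN_GOOD_SHA256
```

Cryptographically signed firmware is the stronger version of the same idea, and its absence is part of what made the converters unrecoverable once overwritten.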
Armed with the malicious firmware, the attackers were ready for their assault.
Sometime around 3:30 p.m. on December 23 they entered the SCADA networks through the hijacked VPNs and sent commands to disable the UPS systems they had already reconfigured. Then they began to open breakers. But before they did, they launched a telephone denial-of-service attack against customer call centers to prevent customers from calling in to report the outage. TDoS attacks are similar to DDoS attacks that send a flood of data to web servers. In this case, the center’s phone systems were flooded with thousands of bogus calls that appeared to come from Moscow, in order to prevent legitimate callers from getting through. Lee notes that the move illustrates a high level of sophistication and planning on the part of the attackers. Cybercriminals and even some nation-state actors often fail to anticipate all contingencies. "What sophisticated actors do is they put concerted effort into even unlikely scenarios to make sure they’re covering all aspects of what could go wrong," he says.
The move certainly bought the attackers more time to complete their mission, because by the time the operator whose machine was hijacked noticed what was happening, a number of substations had already been taken down. But if this was a political hack launched by Russia against Ukraine, the TDoS likely also had another goal, Lee and Assante say: to stoke the ire of Ukrainian customers and weaken their trust in the Ukrainian power companies and government.
As the attackers opened up breakers and took a string of substations off the grid, they also overwrote the firmware on some of the substation serial-to-Ethernet converters, replacing legitimate firmware with their malicious firmware and rendering the converters thereafter inoperable and unrecoverable, unable to receive commands. “Once you … rewrite the firmware, there's no going back from that [to aid recovery]. You have to be at that site and manually switch operations,” Lee says. "Blowing [these] gateways with firmware modifications means they can't recover until they get new devices and integrate them." After completing all of this, they used a piece of malware called KillDisk to wipe files from operator stations to render them inoperable as well. KillDisk wipes or overwrites data in essential system files, causing computers to crash. Because it also overwrites the master boot record, the infected computers could not reboot.
Some of the KillDisk components had to be set off manually, but Lee says that in two cases the attackers used a logic bomb that launched KillDisk automatically about 90 minutes into the attack. This would have been around 5 p.m., the same time that Prykarpattyaoblenergo posted a note to its website acknowledging for the first time what customers already knew, that power was out in certain regions, and reassuring them that it was working feverishly to figure out the source of the problem. Half an hour later, after KillDisk would have completed its dirty deed and left power operators with little doubt about what caused the widespread blackout, the company posted a second note to customers saying the cause of the outage was hackers.
Ukraine's intelligence community has said with utter certainty that Russia is behind the attack, though it has offered no proof to support the claim. But given political tensions between the two nations it's not a far-fetched scenario. Relations have been strained between Russia and Ukraine ever since Russia annexed Crimea in 2014 and Crimean authorities began nationalizing Ukrainian-owned energy companies there, angering Ukrainian owners. Then, right before the December blackout in Ukraine occurred, pro-Ukrainian activists physically attacked substations feeding power to Crimea, leaving two million Crimean residents without power in the region that Russia had annexed, as well as a Russian naval base. Speculation has been rampant that the subsequent blackouts in Ukraine were retaliation for the attack on the Crimean substations.
But the attackers who targeted the Ukrainian power companies had begun their operation at least six months before the Crimean substations were attacked. So, although the attack in Crimea may have been a catalyst for the subsequent attack on the Ukrainian power companies, it's clear that it wasn't the original motivation, Lee says. In fact, he says the forensic evidence suggests that the attackers may not have planned to take out the power in Ukraine when they did, but rushed their plans after the attack in Crimea.
"Looking at the data, it looks like they would have benefited and been able to do more had they been planning and gathering intelligence longer," he says. "So it looks like they may have rushed the campaign." He speculates that if Russia is responsible for the attack, the impetus may have been something completely different. Recently, for example, the Ukrainian parliament has been considering a bill to nationalize privately owned power companies in Ukraine. Some of those companies are owned by a powerful Russian oligarch who has close ties to Putin. Lee says it’s possible the attack on the Ukrainian power companies was a message to Ukrainian authorities not to pursue nationalization.
That analysis is supported by another facet of the attack: the fact that the hackers could have done much more damage than they did had they decided to physically destroy substation equipment as well, making it much harder to restore power after the blackout. The US government demonstrated an attack in 2007 that showed how hackers could physically destroy a power generator simply by remotely sending 21 lines of malicious code.
Lee says everything about the Ukraine power grid attack suggests it was primarily designed to send a message. "'We want to be seen, and we want to send you a message,’" is how he interprets it. "This is very mafioso in terms of like, oh, you think you can take away the power [in Crimea]? Well I can take away the power from you." Whatever the intent of the blackout, it was a first-of-its-kind attack that set an ominous precedent for the safety and security of power grids everywhere. The operator at Prykarpattyaoblenergo could not have known what that little flicker of his mouse cursor portended that day. But now the people in charge of the world's power supplies have been warned. This attack was relatively short-lived and benign. The next one might not be.
1 Correction 3/03/16 8:17 a.m. ET: UPS here stands for uninterruptible power supply, not universal power supply.
" |
205 | 2,021 | "Online Harassment Toward Women Is Getting Even More Insidious | WIRED" | "https://www.wired.com/story/online-harassment-toward-women-getting-more-insidious" | "Nina Jankowicz, Ideas
Online Harassment Toward Women Is Getting Even More Insidious
The disinformation campaign against Vice President Harris during the 2020 election shapeshifted to avoid moderation.
Photograph: Alex Edelman/Bloomberg/Getty Images
It was somewhere between the calls to repeal the 19th Amendment and the declarations that I was a traitor who belonged in Guantanamo Bay that the trolls started to wear me down.
Nina Jankowicz, the author of How to Lose the Information War, studies disinformation at The Wilson Center. This essay is based on the new study “Malign Creativity: How Gender, Sex, and Lies are Weaponized Against Women Online.”
Several days before the onslaught began, I posted a dry Twitter video debunking a conspiratorial narrative that was gaining prominence among Trump supporters. The next week, while sitting in the waiting room of my doctor’s office, my iPhone grew hot as it processed a stream of tweets and direct messages telling me “Islam was right about women,” criticizing the size of my breasts, my chin dimple, and the symmetry of my face. According to the trolls, I was an “affluent white female liberal,” or “AWFL,” and part of a CIA psyop. The guest room that has served as my office since March, where I filmed the video, was actually a basement in Langley, they said. Next year, I would be “dealt with in the streets.” One tweet read chillingly: “I’d fix her.” When it’s happening to you, online abuse feels like a tornado of thousands of insects that, when swatted, will simply get angrier, or dirt that will get kicked up if you struggle.
I sent hundreds of reports to Twitter during the weeks I was targeted, all in vain. How could the artificial intelligence assisting with content moderation understand that the pictures of empty egg cartons were not nudges to go to the grocery store, but taunts meant to suggest that, as one of my abusers put it, “you birth babies, we build bridges,” and that my birthing years were dwindling? The abuse I experienced—and my near total lack of recourse—is not unique. In fact, on the online misogyny scale, my experience wasn’t even particularly bad. I did not get any rape threats. Unlike more than 668,000 unwitting women, no one—to my knowledge, anyway—created deep fake pornography of me. I was not the subject of an involved sexualized disinformation campaign, the likes of which Vice President Kamala Harris and Representatives Alexandria Ocasio-Cortez and Ilhan Omar have endured.
But all of this is terrifyingly ubiquitous, and its impact on society is sprawling. Just before the United States saw its first woman vice president, treasury secretary, director of national intelligence, and more women and women of color serving in Congress than ever before, these figures were also being targeted for sex-based harassment meant to silence them. Over a two-month period in late 2020, I led a research team monitoring the social media mentions of 13 prominent politicians, including Harris, Ocasio-Cortez, and Omar. We found more than 336,000 instances of gendered and sexualized abuse posted by over 190,000 users. These widespread campaigns represent just a sliver of the abuse that women in public life deal with on a daily basis in the internet era.
Over half of the research subjects were also targeted with gendered and sexualized disinformation, a subset of online abuse that uses false or misleading sex-based narratives against women, often with some degree of coordination. These campaigns typically aim to deter women from participating in the public sphere. One such narrative suggested that several targets were secretly transgender. It implied not only that transgender individuals are inherently deceptive, but that this deception is responsible for the power and influence that women like Harris, Ocasio-Cortez, or New Zealand prime minister Jacinda Ardern hold. Women of color were subject to compounded attacks, playing to two of America’s greatest weaknesses: its endemic racism and misogyny.
The social media platforms, for their part, have not created infrastructures that support women enduring harassment and disinformation campaigns. Instead, they have created environments to cater to the needs and challenges that white, cisgender men face. They may as well adopt my abusers’ refrain: “If you can’t stand the heat, get out of the kitchen.” Platforms like Facebook and Twitter force women to report individual instances of harassment and disinformation, only to have them denied or ignored, despite the very real harm they inflict on victims’ lives and reputations. While platforms have improved at detecting some blatant gendered abuse—think of the top five profanities related to female body parts—they have been caught flat-footed by the burgeoning malign creativity that abusers employ. Harassers recognize that certain words and phrases might trigger platforms’ detection mechanisms, and so they use coded language, iterative, context-based visual and textual memes, and other tactics to avoid automated removal. The egg carton meme I received is just one example.
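To see why this evasion works, consider a toy filter. The sketch below is illustrative only (the blocklist and substitution table are invented stand-ins, not any platform's actual rules), but it shows how a single normalization pass catches lookalike-character obfuscations that exact matching misses:

```python
# Toy illustration of keyword-filter evasion and normalization.
# The blocklist and substitution table are invented for this example.

BLOCKLIST = {"awful"}  # stand-in for a real list of abusive terms

# Map common lookalike characters back to letters, then strip separators.
LOOKALIKES = str.maketrans({"@": "a", "4": "a", "0": "o", "1": "l", "$": "s", "3": "e"})
SEPARATORS = str.maketrans("", "", " .-_*!")

def naive_filter(text: str) -> bool:
    """Exact token matching: trivially evaded by '@wfu1' or 'a.w.f.u.l'."""
    return any(token in BLOCKLIST for token in text.lower().split())

def normalized_filter(text: str) -> bool:
    """Canonicalize lookalikes and drop separators before matching."""
    canon = text.lower().translate(LOOKALIKES).translate(SEPARATORS)
    return any(term in canon for term in BLOCKLIST)

for post in ["you are awful", "you are @wfu1", "you are a.w.f.u.l"]:
    print(f"{post!r}: naive={naive_filter(post)}, normalized={normalized_filter(post)}")
# The naive filter flags only the first post; normalization catches all three.
```

Even so, normalization only addresses text. Context-dependent visual taunts like the egg cartons have no string to match at all, which is part of why automated removal keeps falling behind.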
A more notable example of malign creativity is the sexualized disinformation campaign against Vice President Harris during the 2020 election. She was targeted with various coded, derogatory nicknames, slogans, and visuals that shapeshifted to avoid moderation; the social media platforms could not keep up with the changes fast enough to quash the demeaning, spurious content. We found more than 260,000 instances of such abuse—over 78 percent of all the data we collected—in the two months we monitored conversations about her on Twitter, Reddit, Gab, 4chan, 8kun, and Parler.
The effects of these campaigns are broad, impacting women themselves, the tone of their engagement in public life, and the functioning of representative democracy. Women interviewed as part of our study described the campaigns against them as “a tsunami,” “terrorism,” and like “someone had put me in a dryer and ... left it on high for two days.” One interviewee noted that when she is the subject of online harassment, she disengages and self-censors. “You don't feel safe to continue speaking,” she says, “so you don't speak.” This is an impediment to women’s participation in a variety of fields for which public engagement is part of the job description. Further, when women see that even their most powerful and successful counterparts are forced to wade through vile online misogyny, it makes them question whether publishing, speaking out, or running for office is worth the burden. It is time to reverse this trend by employing creativity and technological prowess to make a pariah of online misogyny.
Like the rest of society in the post #MeToo era, social media platforms must decisively make the shift toward believing women. Rather than relying on AI, which doesn’t capture the nuance of many taunts, and one-off reports, which don’t communicate the full user experience, platforms must transition to incident-based report systems. This would allow targets to highlight the inciting piece of content that led to their abuse, such as a tacit pile-on instruction from a high-follower account. It would also allow platforms to continually update the classifiers that help them identify abusive content, making it increasingly less likely that women in public life are forced to endure it as a cost of their participation.
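As a rough sketch of what incident-based reporting could mean in practice, here is one hypothetical data model; every class and field name below is invented for illustration and is not drawn from any platform's actual systems:

```python
# Hypothetical data model for incident-based abuse reporting, contrasted
# with one-off per-post reports. All names here are invented placeholders.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    post_id: str
    author: str
    text: str
    posted_at: datetime

@dataclass
class IncidentReport:
    target_user: str
    inciting_post: Post  # e.g., a pile-on instruction from a high-follower account
    abusive_replies: list[Post] = field(default_factory=list)
    context_note: str = ""  # the target explains coded language, like the egg cartons

    def add_reply(self, post: Post) -> None:
        self.abusive_replies.append(post)

    def labeled_examples(self) -> list[tuple[str, str]]:
        """Emit (text, label) pairs that could feed back into abuse classifiers."""
        return [(p.text, "abusive-in-context") for p in self.abusive_replies]

report = IncidentReport(
    target_user="@target",
    inciting_post=Post("p1", "@instigator", "go tell her what you think", datetime(2020, 11, 2)),
    context_note="Replies reference egg cartons as a fertility taunt.",
)
report.add_reply(Post("p2", "@troll", "[photo of empty egg cartons]", datetime(2020, 11, 2)))
```

The design point is that moderators would see the campaign as one object, inciting post, pile-on, and context together, rather than as isolated reports judged one at a time.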
Meanwhile, Congress should reauthorize the Violence Against Women Act (VAWA) and include provisions against online gender-based harassment. The 2019 VAWA Reauthorization Act never received a vote in the Senate, leaving its crucial protections for victims of gender-based violence lapsed. When the new Congress considers VAWA reauthorization, lawmakers should add provisions to support targets of online gender-based harassment, including budgetary allocations to build law enforcement awareness about sexist threats online. Congress also needs to set an example, not only by calling out gender-based abuse and harassment when they see or experience it, but also by not engaging in it themselves, including by not sharing gendered disinformation or slurs. Members of the House of Representatives are already prohibited from posting “visual misrepresentations of other people, including but not limited to deep fake technology,” and may not “disparage” other Members, including through ad hominem attacks in official communications. But given the widespread nature of attacks against women in politics and the downstream effect they have on recruitment to the political fold, Congress must develop more detailed standards for decorum around gender issues.
This problem requires action at lower levels, too; organizations should develop support policies for employees and affiliates, such as freelancers, facing online harassment and abuse. For many public-facing industries, including the media, academia, think tanks, and government, engagement on social media is critical to both brand and individual success. Many organizations have policies relating to affiliates’ use of such technologies, but far fewer have support mechanisms for those undergoing online abuse as a result of their work-related online engagement. Employers should consider providing mental health services and support for affiliates’ legal fees and other expenses, such as anti-doxxing service subscriptions. They should also outline clear mechanisms for targets to report such campaigns against them to official communications and human resources staff.
As I have documented and written about the gendered harassment to which I and others have been subject, more has come my way. This work will likely generate some too. But we must keep speaking up; women deserve to run for office, do their jobs, and express their opinions without facing abusers aiming to detract from their prowess, expertise, or ability, while social media platforms turn a blind eye. Yes, this is one of social media’s many “hard problems,” but it is one we must address to build a world that is more equitable, more representative, and more just.
" |
206 | 2,023 | "The Impossible Fight to Stop Canada’s Wildfires | WIRED" | "https://www.wired.com/story/canada-wildfires-future" | "Omar Mouallem, Science
The Impossible Fight to Stop Canada’s Wildfires
Photograph: DARREN HULL/Getty Images
Canadian firefighter Scott Rennick knew this summer would be bad. It was May 2023 and Rennick was commanding one of British Columbia’s six incident management teams, or IMTs, specialized crews tasked with managing the most complex fires. His 18-person crew had just arrived in the northeast city of Fort Saint John to fight an aggressive bushfire. The province’s wildfire service was still in the midst of hiring, training, and recruiting when the human-caused fire was discovered on Saturday, May 13. By Sunday, flames had spread over 7,000 acres. By Monday morning, the fire had multiplied fivefold and now covered an area roughly the size of Staten Island. But the worst was yet to come.
Drought had already rendered the land hot and dry. The third ingredient for a natural-disaster-level fire was wind. That came Monday afternoon as a cold front pushed directly into its path, creating gusty 25 mph winds. In a few hours, the blaze spread 9 miles in various directions, approaching Fort Saint John, British Columbia’s oil and gas capital with a population of 21,000.
Rennick says the terrifying glory of a firestorm—ferocious fires fueled by powerful winds drawn into the flames—never ceases to amaze, even after 30 years on the job. It sounded and moved like a freight train, sucking up tens of thousands of pounds of oxygen as it swallowed everything in sight. For 18 straight hours, Rennick and his crew fought alongside dozens of firefighters and heavy equipment operators to create firebreaks wide enough to catch flying embers. Then, exhausted, they rested.
At the ad hoc incident command post, Rennick looked up the three-month forecast on his laptop. Western Canada was covered by a deep red blob—low precipitation, warm temperatures. Later, as the commander relayed the weather report to his crew, someone asked him how many deployments he predicted that season. A typical summer is four. Rennick held up six fingers. “Hopefully I’m wrong,” he added.
As of this week, Rennick’s crew were returning home from their fifth deployment, tackling one of 1,050 active wildfires in Canada—fires becoming bigger, hotter, longer lasting, and more frequent than ever before. He’s already gearing up for his sixth deployment, and with up to six weeks left in the wildfire season, a seventh is likely.
Rennick, who grew up in the city of Vernon in British Columbia, has battled fires most of his life—as did his father and grandfather. “This is just a very different environment we find ourselves in now,” he says. “People who don’t believe in climate change can come talk to me.” At the time of writing, British Columbia is in the midst of a province-wide state of emergency. Up to 200 buildings are estimated to have been destroyed by wildfires in the Okanagan region. And the fires are still burning. “In that kind of extreme situation, it’s no different than trying to put your hand in front of a tsunami or a hurricane and say, ‘Stop,’” says Rennick. Two years ago, during a record-breaking heat wave, he watched a grassfire engulf the town of Lytton, annihilating it in 23 minutes. And yet the intensity and frequency of this summer has exceeded anything Rennick thought possible.
An aerial view shows charred remains on the side of the road beside the highway in Enterprise, Northwest Territories, Canada, on August 20, 2023. Enterprise and Hay River were put on evacuation orders prior to the city of Yellowknife.
Photograph: ANDREJ IVANOV/Getty Images
In June, Rennick’s second deployment of summer 2023 took him to the town of Edson in Alberta, where his IMT joined hundreds of firefighters from as far afield as Australia in a battle against a campaign fire. This type of inferno, once rare but now alarmingly common, is so large and so powerful that it can take months of aerial and ground operations to contain. Campaign fires can even survive a Canadian winter, smoldering under the snow as temperatures fall to minus 60 degrees Celsius before bursting back to life as zombie fires in the spring. At one point, smoke from the fire near Edson formed pyrocumulonimbus clouds that injected a plume of soot 31 miles up into the stratosphere, which then traveled around the globe.
Formed only under extreme wildfire conditions, pyrocumulonimbus clouds, or pyroCbs, are a firefighting nightmare. They can generate lightning, thus igniting more fires; they create windstorms that spread the blaze; and, though rarely, they can create “firenados,” pyrogenic vortex columns that can reach heights of 3,000 feet and speeds up to 140 mph. Like campaign fires, pyroCbs were once a novel phenomenon more often associated with volcanic eruptions. The US Naval Research Laboratory only started tracking them a decade ago. A typical year sees 40 or 50 worldwide. The previous record, set in 2021, was 100. By August of 2023, Canada alone had generated 133 of 153 pyroCbs observed by the NRL. “This,” says Rennick, “is the most unprecedented season in the history of the country.”
Unprecedented doesn’t even begin to describe what Canada is up against. Close to 6,000 fires have scorched 34 million acres, an area the size of New York State, according to the Canadian Interagency Forest Fire Centre. That’s three times bigger than anything ever recorded in the US, and 10 times the 10-year average for Canada, which, historically speaking, was already well above average. Over 150,000 Canadians are currently displaced, including two-thirds of the population of Northwest Territories and, at the time of writing, 35,000 people in British Columbia.
The speed of change is being driven by a warming world. A warmer world means more moisture is sucked from the ground, resulting in drier fuels. The drier the fuel, the easier it is for a fire to start and spread and burn with greater intensity. That moisture being sucked from the ground also creates more thunderstorms. With this comes more lightning, which is responsible for starting half of Canada’s wildfires. These fires, due to their remoteness, account for 90 percent of the area burned in Canada. Many of these fires aren’t just hard to reach—they’re hard to even detect. This gives them more time to spread, with many fires in remote areas of Canada monitored rather than extinguished.
So much of Canada is burning, and so quickly, that the seed banks needed for forest regeneration work could be stripped bare within years. But disappearing forests won’t mean fewer wildfires, as repeatedly scorched land will become fire-prone grasslands and shrublands. If this summer becomes the norm, rather than the exception, the ecosystems that humans depend on for clean water, pollinators, and food will be altered—with unknown consequences. “We don't really know where it’s going, and that is very frightening,” says Daniel Perrakis, a fire research scientist with Natural Resources Canada.
People sit in Bryant Park amid a smoky haze from wildfires in Canada on June 7, 2023 in New York City.
Photograph: David Dee Delgado/Getty Images
With fire comes smoke. As wildfires have torn through Canada, much of North America has choked under a cloud of noxious air. To date, Canada’s fires this summer have emitted 1 billion metric tons of carbon dioxide, a quarter of total global fire-carbon emissions so far this year. This has made Canada, a country of 40 million people, the world’s fourth biggest polluter. In 2022, Canada ranked 10th on that list.
That smoke affects the health and well-being of the whole planet. But it poses a particular risk to those living in some of North America’s most populous cities, who have experienced unprecedented levels of air pollution in recent months. As a result, Canada’s firefighting competence is being scrutinized as nations deploy hundreds of their own firefighters to the country, while also pressuring Ottawa to get a grip on the crisis.
In June, the east coast of North America struggled through days of dense smoke pushed south from Quebec. Toronto’s CN Tower vanished, and Manhattan was painted a dense, Blade Runner-esque orange. The smog was so thick that the Port Authority of New York and New Jersey limited driving speeds on bridges.
The smoke, which originated from fires burning 750 miles to the north, soon covered an area of North America home to more than 145 million people. Across much of the Northeast of the US and southeastern Canada, the Air Quality Index, which government agencies use to measure pollutant levels and health risks, shot from between 50 and 70, a healthy to normal range, to over 400—on a scale that maxes out at 500. Emergency health warnings persuaded many people to stay indoors. Stock prices at air filtration manufacturers rose by as much as 15 percent.
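For readers unfamiliar with how those numbers are produced: US agencies convert a pollutant concentration to an AQI value by linear interpolation between fixed breakpoints. Here is a minimal sketch using the EPA's longstanding PM2.5 table (these are the pre-2024 breakpoints, shown for illustration; check current EPA tables before reusing them):

```python
# Sketch of AQI computation for PM2.5 (24-hour average, micrograms per cubic
# meter), using the EPA's pre-2024 breakpoints for illustration.

PM25_BREAKPOINTS = [
    # (conc_lo, conc_hi, aqi_lo, aqi_hi)
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 350.4, 301, 400),  # Hazardous
    (350.5, 500.4, 401, 500),  # Hazardous (upper band)
]

def pm25_to_aqi(conc: float) -> int:
    """AQI = (aqi_hi - aqi_lo) / (c_hi - c_lo) * (conc - c_lo) + aqi_lo."""
    for c_lo, c_hi, aqi_lo, aqi_hi in PM25_BREAKPOINTS:
        if c_lo <= conc <= c_hi:
            return round((aqi_hi - aqi_lo) / (c_hi - c_lo) * (conc - c_lo) + aqi_lo)
    return 500  # concentrations past the table pin to the top of the scale

print(pm25_to_aqi(15.0))   # 57: an ordinary hazy day
print(pm25_to_aqi(400.0))  # 434: consistent with the 400-plus readings above
```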
The wildfire smoke from Quebec sat in the atmosphere for weeks, spanning the Atlantic Ocean and dimming skies as far away as Portugal. For many on the east coast of North America, the orange skies of June were a wake-up call. For more than a decade, increasingly severe wildfires had ravaged North America’s West , from California in the south to Alaska in the north. Now, the age of flames had arrived in the east.
When it comes to wildfire smoke, the biggest danger is PM 2.5, the fine particulate matter that gives the sky a haunting orange hue. “When you breathe these very small particles, they can make it deep into the lungs, right down to the alveolar oxygen exchange region,” says Sarah Henderson, the science director for the National Collaborating Centre for Environmental Health in Canada, who’s been studying the health effects of wildfire smoke for more than 20 years. “Then we get inflammation that can affect all organ systems in the body.”
There are still many unknowns about the long-term consequences of inhaling wildfire particles, Henderson says, in part because sustained exposure is a relatively new phenomenon.
A 2022 Lancet article examining Canadian data over 20 years linked wildfire exposure to slightly higher rates of lung cancer and brain tumors; however, the researchers said more data was needed.
Still, there’s reason to believe that even occasional exposure can have repercussions that last a lifetime, especially in children. By examining the long-term effects of other sources of air pollution, Henderson says there’s reason to believe that wildfire smoke might affect respiratory, neurological, and prenatal health.
A Stanford Medicine study of children from Fresno, California, who were exposed to smoke from two large wildfires in 2014 found negative effects on immunity-related blood cells and genes.
“We have to go into every wildfire season with the idea that it might be the worst season we've ever seen—and that includes both wildfire risk and smoke,” says Henderson. “That is the reality of the changing climate and the wildfire regime in Canada.” To that end, she thinks officials may need to consider rewriting building codes to insulate against indoor smoke penetration.
Homes sit on Kalamalka Lake in smoke while wildfires continue to burn in Lake County, British Columbia, and surrounding regions on August 20, 2023.
Photograph: PAIGE TAYLOR WHITE/Getty Images
Mike Flannigan, science director of the Canadian Partnership for Wildland Fire Science and a professor at Thompson Rivers University in British Columbia, calls this summer a “wake-up year” for Canada, which has struggled to curb its wildfire crisis due to a lack of a nationwide strategy, funding, and political willpower. “We're going into uncharted territory. And we're going faster than I would have thought possible,” he says.
As a result, Canada needs firefighters—lots of them. With 5,500 wildland firefighters, roughly five per active fire at the moment, Canada has called in international fire crews from the southern hemisphere and the Pacific Northwest of the US to help. But even with the assistance of thousands of foreign firefighters, Canada has struggled to procure enough air tankers to water-bomb new fires in the critical first few hours. And so the officials tasked with fighting Canada’s wildfires have been forced to choose, allowing many fires to spread unless they pose an immediate threat to human life or critical infrastructure.
This week the Globe and Mail reported that Canada’s foreign workforce was 680 firefighters, down from 1,754 in July. That fall has been attributed to contracts expiring, but also to firefighters needing to return home to fight fires raging in their own countries. Flannigan believes Canada needs to hire 2,500 more wildland firefighters within its borders to meet current needs. But this is an industry plagued by high attrition rates, due to mental burnout and a predominantly seasonal and volunteer workforce. A 2016 report from Ontario FireRanger, the province’s wildland firefighters, found that the organization was “stuck in a cycle of continuously reiterating basic training” due to high turnover.
Things haven’t improved much since. On top of working 12- to 16-hour days for weeks on end, this year Canadian firefighters have faced extraordinary danger. On-site fatalities are almost unheard of in Canada, but four firefighters have died this summer, including two in British Columbia. Rennick says the unrelenting season has made it difficult for his colleagues to process the emotional toll. “Once myself and my colleagues stop and they go back to their regular jobs or part-time jobs, the full gravity of the season will hit us,” he says.
In a normal year, Rennick would expect a staff turnover of 20 percent, but next year will certainly be higher. To that end, Natural Resources Canada recently allocated CA$37.9 million ($27.9 million) to recruiting, training, and retaining firefighters in high-risk zones. But several experts and politicians, including Flannigan, want federal officials to go further and are calling for the creation of a national firefighting service.
At present, there’s no single national strategy guiding wildfire management in Canada. A patchwork of provincial, territorial, and national park units instead share resources via the Canadian Interagency Forest Fire Centre. But the center, founded in 1982, has been overwhelmed by the scale of the current crisis. Historically, Canada’s wildfires were staggered across both time and geography. Now, huge fires are occurring well out of season and in regions previously less affected, including the Maritime Provinces and Northern Quebec and Ontario, all while the country struggles to increase and update its fleet of aging water bombers.
Coordination between woodland and urban firefighters is another challenge. In 2016, a fire at Fort McMurray in Alberta showed what happens when communication breaks down. The Beast, as it came to be known, took locals by surprise, resulting in a last-minute evacuation of 88,000 people on a single highway through flames and embers.
An inquiry into the most expensive natural disaster in Canadian history reported that local and provincial authorities weren’t even sharing the same radio frequencies. “This was particularly problematic when it came to air attack,” the report found. “Alberta Forestry aircraft had no way to forward a direct message to municipal firefighters.” When the fire did reach the city, local emergency management learned about it from social media.
Such catastrophes, combined with this record-breaking summer, have also led Canada to consider the creation of a bureau similar to America’s Federal Emergency Management Agency (FEMA). In June, Bill Blair, then Canada’s emergency preparedness minister, told the CBC that his government had begun discussions with the head of FEMA about creating a Canadian equivalent, as well as a joint agency similar to the North American Aerospace Defense Command, or NORAD, to manage cross-border emergencies—including wildfires.
Public Safety Canada, the country’s closest equivalent to FEMA, struggles to address large-scale events because of its broad focus, of which only a small part is dedicated to emergency management. The agency spends just $4.70 per Canadian for national emergencies, compared to FEMA’s budget of $87.87 per American. Public Safety Canada’s primary role this summer has been to deploy the armed forces to assist in building fire breaks and assist in evacuations.
Wildfires have long been a part of the Canadian landscape, but urban development over the past 70 years, especially in the west, has created a new problem. Today, more people than ever are living right next to nature, with forests butting right up against new urban developments. The staggering destruction and death toll of fires in Paradise, California, in 2018, and this month’s tragedy on Maui, were partly attributed to the intermingling of urban development and vegetative fuels.
Despite its vast size, Canada faces similar problems. “We’re reaching a point where creation of some agency like FEMA has become a necessity,” says Ali Asgary, a disaster and emergency management professor at York University in Toronto. He adds that Public Safety Canada, or perhaps a future emergency management agency, also needs to do more to prepare communities for managing hazardous pollution levels. As smoke and flames cross the southern border, pressure is mounting on Canadian officials to ensure there isn’t a repeat of this summer—or worse.
Like the climate crisis itself, managing the wildfire crisis is politically complex. Though fire suppression tactics have changed over the past two generations, Canada is currently dealing with a fire deficit of 100 to 150 years. This has created an oversupply of tinder that should have been cleared long ago by healthier fire cycles.
Fire plays an important ecological role in the dense, carbon-packed boreal forests that cover more than half of Canada and 14 percent of the world, something many Indigenous people have long understood. Fires can help reduce pest infestations, open water channels, and improve soil health. But Canada’s woodlands changed with industrial techniques that extinguished fires with full force, resulting in more overgrown, homogenous, and flammable landscapes.
Some of the most effective prevention techniques are also highly unpopular, such as preemptive fire bans and forest closures during high-risk periods, because they interfere with camping, hunting, and other recreational activities. Even more controversial is the tactic of prescribed fires—literally fighting fire with fire, by ridding forests of flammable underbrush during low-risk times, or back-burning during active fires to prevent wildfires from spreading.
Controlled burns can be politically challenging, especially during an active firefight, when the public is sick of breathing smoke or concerned about prescribed fires going rogue. But, explains Amy Cardinal Christianson, a Métis scientist and Parks Canada’s Indigenous fire specialist, controlled burns are one of our best tools, because they essentially replace fires of chance with fires of choice.
A resident collects supplies after a supply drop during a wildfire in the evacuated town of Scotch Creek, British Columbia, Canada, on Sunday, Aug. 20, 2023.
Photograph: Cole Burston/Bloomberg/Getty Images
Cardinal Christianson’s role at Parks Canada, the federal agency charged with protecting the country’s natural and cultural heritage, is to work on partnering with Indigenous communities to apply burning practices that have been suppressed by colonization. “Indigenous people have always been trying to push to be more involved in fire management in Canada and especially in having cultural fire on the landscape,” says Cardinal Christianson.
Such practices are often family-oriented, involving children and elders, and range from burning the underbrush while there’s still snow on the ground to burning an overgrown bush to protect a berry patch. Since colonization, government regulators have suppressed much of this knowledge, but many First Nations have never stopped treating the land with fire—they just went underground.
Today, many of Canada’s Indigenous people are frustrated with governments’ “two-tiered system,” which often prohibits cultural burning while appropriating Indigenous fire knowledge for use on massive prescribed fires. “There’s a lot of concern that agencies will come and extract the knowledge that they want and put it into their agency practices, but then Indigenous people still won’t be at the table,” says Cardinal Christianson.
The frustration among Indigenous Canadians is amplified by the disproportionate impact of wildfires on their communities. A recent study by Cardinal Christianson and her colleagues examined Canadian evacuations from all causes spanning 1980 to 2021. The researchers found that 37 percent of people on First Nations reserves have already survived at least one wildfire evacuation. Moreover, Indigenous communities comprised nearly half of all fire evacuations in Canada, and nearly all smoke evacuations, since 1980.
Across North America, many people are only now beginning to understand the threats from fire and smoke that people in the other half of the continent have faced for centuries, threats now arriving with increasing frequency and intensity. The prognosis is grim. More land will burn, more people will be displaced, many more again will breathe toxic air. But, beyond that, says Flannigan, if fires on this scale continue, the forests they are burning through will soon vanish entirely.
Yet the fires that have burned across Canada this summer, and continue to burn, won’t become the “new normal,” says Flannigan. Instead, he says, things will only get worse. “I often use Dante’s circles of hell,” he says. “I’m not sure what circle we're on—three, four?—but there's more circles below us, and that's where we're going.” This summer has made the urgency of the situation unavoidably clear: Canada, and the world, needs a plan—and fast.
Update 8/24/2023 2:17 PM ET: This story has been updated to correct the figure for metric tons of carbon dioxide emitted by Canada's fires this summer.
" |
207 | 2,022 | "New Lapsus$ Hack Documents Make Okta’s Response Look More Bizarre | WIRED" | "https://www.wired.com/story/lapsus-okta-hack-sitel-leak" | "Lily Hay Newman, Security
Leaked Details of the Lapsus$ Hack Make Okta’s Slow Response Look More Bizarre
Photograph: Sundry Photography/Alamy
In the week since the digital extortion group Lapsus$ first revealed that it had breached the identity management platform Okta through one of the company's subprocessors, customers and organizations across the tech industry have been scrambling to understand the true impact of the incident. The subprocessor, Sykes Enterprises, which is owned by the business services outsourcing company Sitel Group, confirmed publicly last week that it suffered a data breach in January 2022. Now, leaked documents show Sitel's initial breach notification to customers, which would include Okta, on January 25, as well as a detailed “Intrusion Timeline” dated March 17.
The documents raise serious questions about the state of Sitel/Sykes' security defenses prior to the breach, and they highlight apparent gaps in Okta's response to the incident. Sitel declined to comment about the documents, which were obtained by independent security researcher Bill Demirkapi and shared with WIRED.
Okta said in a statement, “We are aware of the public disclosure of what appears to be a portion of a report Sitel prepared regarding its incident. … Its content is consistent with the chronology we have disclosed regarding the January 2022 compromise at Sitel.” The company added, "Once we received this summary report from Sitel on March 17, we should have moved more swiftly to understand its implications. We are determined to learn from and improve following this incident." When the Lapsus$ group published screenshots claiming it had breached Okta on March 21, the company says that it had already received Sitel's breach report on March 17. But after sitting with the report for four days, Okta seemed to be caught flat-footed when the hackers took the information public. The company even initially said, “The Okta service has not been breached.” WIRED has not seen the complete report, but the "Intrusion Timeline" alone would presumably be deeply alarming to a company like Okta, which essentially holds the keys to the kingdom for thousands of major organizations. Okta said last week that the “maximum potential impact” of the breach reaches 366 customers.
The timeline, which was seemingly produced by security investigators at Mandiant or based on data gathered by the firm, shows that the Lapsus$ group was able to use extremely well known and widely available hacking tools, like the password-grabbing tool Mimikatz , to rampage through Sitel's systems. At the outset, the attackers were also able to gain enough system privileges to disable security scanning tools that might have flagged the intrusion sooner. The timeline shows that attackers initially compromised Sykes on January 16 and then ramped up their attack throughout the 19th and 20th until their last login on the afternoon of the 21st, which the timeline calls “Complete Mission.” “The attack timeline is embarrassingly worrisome for Sitel group,” Demirkapi says. “The attackers did not attempt to maintain operational security much at all. They quite literally searched the internet on their compromised machines for known malicious tooling, downloading them from official sources.” With just the information Sitel and Okta have described having right away at the end of January, though, it is also unclear why the two companies do not seem to have mounted more expansive and urgent responses while Mandiant's investigation was ongoing. Mandiant also declined to comment for this story.
Okta has said publicly that it detected suspicious activity on a Sykes employee’s Okta account on January 20 and 21 and shared information with Sitel at that time. Sitel's “Customer Communication” on January 25 would have seemingly been an indication that even more was awry than Okta previously knew. The Sitel document describes "a security incident … within our VPN gateways, Thin Kiosks, and SRW servers."
Sitel's notification does, however, seemingly attempt to downplay the severity of the incident. The company wrote at the time (emphasis theirs), "we remain confident that there are no Indicators of Compromise (IoC) and there is still no evidence of malware, ransomware, or endpoint corruption."
The Lapsus$ hackers have been rapidly ramping up their attacks since they came on the scene in December. The group has targeted dozens of organizations in South America, the United Kingdom, Europe, and Asia and stole source code and other sensitive data from companies like Nvidia, Samsung, and Ubisoft. They do not spread ransomware, instead threatening to leak stolen information in apparent extortion attempts. At the end of last week, City of London police arrested seven people, ages 16 to 21, in connection with Lapsus$, but reportedly released all seven without charges. In the meantime, the group's Telegram channel has remained active.
Demirkapi says that the leaked documents are confounding and that both Okta and Sitel need to be more forthcoming about the sequence of events.
“We take our responsibility to protect and secure our customers' information very seriously,” Okta chief security officer David Bradbury wrote last week. “We are deeply committed to transparency and will communicate additional updates when available.”
Updated Tuesday March 19, 2022 at 9:15 am ET to include comment from Okta.
" |
208 | 2,023 | "The Comedy of Errors That Let China-Backed Hackers Steal Microsoft’s Signing Key | WIRED" | "https://www.wired.com/story/china-backed-hackers-steal-microsofts-signing-key-post-mortem" | "Lily Hay Newman, Security
The Comedy of Errors That Let China-Backed Hackers Steal Microsoft’s Signing Key
Photograph: Yaroslav Kryuchka/Getty Images
Microsoft said in June that a China-backed hacking group had stolen a cryptographic key from the company's systems. This key allowed the attackers to access cloud-based Outlook email systems for 25 organizations, including multiple US government agencies. At the time of the disclosure, however, Microsoft did not explain how the hackers were able to compromise such a sensitive and highly guarded key, or how they were able to use the key to move between consumer- and enterprise-tier systems. But a new postmortem published by the company on Wednesday explains a chain of slipups and oversights that allowed the improbable attack.
Such cryptographic keys are significant in cloud infrastructure because they are used to generate authentication “tokens” that prove a user’s identity for accessing data and services. Microsoft says it stores these sensitive keys in an isolated and strictly access-controlled “production environment.” But during a particular system crash in April 2021, the key in question was an incidental stowaway in a cache of data that crossed out of the protected zone.
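To see why a single key matters so much, it helps to reduce the pattern to a few lines. Tokens are just signed claims, and a verifier only checks the math; whoever holds the signing key can mint tokens indistinguishable from legitimate ones. The sketch below uses a symmetric HMAC purely to stay self-contained (Microsoft's actual consumer keys sign RSA-based tokens, and every name and payload here is invented):

```python
# Minimal sketch of token signing and verification. Real Microsoft account
# tokens are RSA-signed; HMAC is used here only to keep the example
# self-contained. All names and payloads are illustrative.
import base64
import hashlib
import hmac
import json

SIGNING_KEY = b"secret-key-material"  # the kind of secret Storm-0558 obtained

def b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(claims: dict, key: bytes) -> str:
    payload = b64(json.dumps(claims, sort_keys=True).encode())
    signature = b64(hmac.new(key, payload.encode(), hashlib.sha256).digest())
    return f"{payload}.{signature}"

def verify_token(token: str, key: bytes) -> bool:
    payload, signature = token.split(".")
    expected = b64(hmac.new(key, payload.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

# The service mints a token for a real user...
token = mint_token({"sub": "[email protected]"}, SIGNING_KEY)
assert verify_token(token, SIGNING_KEY)

# ...but anyone holding the key can mint an equally "valid" token for anyone.
forged = mint_token({"sub": "[email protected]"}, SIGNING_KEY)
assert verify_token(forged, SIGNING_KEY)  # the verifier cannot tell the difference
```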
“All the best hacks are deaths by 1,000 paper cuts, not something where you exploit a single vulnerability and then get all the goods,” says Jake Williams, a former US National Security Agency hacker who is now on the faculty of the Institute for Applied Network Security.
After the fateful crash of a consumer signing system, the cryptographic key ended up in an automatically generated “crash dump” of data about what had happened. Microsoft's systems are meant to be designed so signing keys and other sensitive data don't end up in crash dumps, but this key slipped through because of a bug. Worse still, the systems built to detect errant data in crash dumps failed to flag the cryptographic key.
With the crash dump seemingly vetted and cleared, it was moved from the production environment to a Microsoft “debugging environment,” a sort of triage and review area connected to the company's regular corporate network. Once again though, a scan designed to spot the accidental inclusion of credentials failed to detect the key's presence in the data.
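To make that failure mode concrete, here is a minimal illustrative sketch of how a pattern-based credential scanner can overlook key material. It is not Microsoft's tooling, and every function name, pattern, and byte string in it is invented; scanners of this general kind match known serialization formats, so a key that lands in a dump in an unanticipated encoding can pass unflagged.

```python
import re

# Hypothetical signatures a dump scanner might look for; real scanners of
# this general kind match known serializations of secrets.
KNOWN_SECRET_PATTERNS = [
    re.compile(rb"-----BEGIN (?:RSA )?PRIVATE KEY-----"),    # PEM-encoded keys
    re.compile(rb"eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\."),  # JWT-shaped tokens
]

def dump_contains_secrets(dump: bytes) -> bool:
    """Return True if the crash dump matches any known secret format."""
    return any(pattern.search(dump) for pattern in KNOWN_SECRET_PATTERNS)

# A key that reaches the dump as raw bytes inside a serialized in-memory
# structure matches none of the patterns, so the dump is wrongly cleared.
dump = b"crash-state:" + (0x10001).to_bytes(64, "big") + b":end"
assert dump_contains_secrets(dump) is False  # the scanner misses the raw key
```

Defense in depth is supposed to catch what one layer misses; here, by Microsoft's account, both the dump-scrubbing layer and the scanning layer failed on the same key.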
Sometime after all of this occurred in April 2021, the Chinese espionage group, which Microsoft calls Storm-0558, compromised the corporate account of a Microsoft engineer. According to Microsoft, that engineer's account was taken over using a stolen access token obtained from a machine infected with malware, though the company hasn't shared how that infection occurred.
With this account, the attackers could access the debugging environment where the ill-fated crash dump and key were stored. Microsoft says it no longer has logs from this era that directly show the compromised account exfiltrating the crash dump, “but this was the most probable mechanism by which the actor acquired the key.” Armed with this crucial discovery, the attackers were able to start generating legitimate Microsoft account access tokens.
Another unanswered question about the incident had been how the attackers used a cryptographic key from the crash log of a consumer signing system to infiltrate the enterprise email accounts of organizations like government agencies. Microsoft said on Wednesday that this was possible because of a flaw related to an application programming interface that the company had provided to help customer systems cryptographically validate signatures. The API had not been fully updated with libraries that would validate whether a system should accept tokens signed with consumer keys or enterprise keys, and as a result, many systems could be tricked into accepting either.
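The shape of that flaw is easier to see in code. The sketch below is a simplified, hypothetical illustration, not Microsoft's actual API: a validator that checks a token's signature against a pooled set of trusted keys but never checks whether the signing key is scoped to the tier the token is presented to. All names and structures are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SigningKey:
    key_id: str
    scope: str  # "consumer" or "enterprise"

# Pooled trust store with consumer and enterprise keys mixed together.
TRUSTED_KEYS = {
    "key-c1": SigningKey("key-c1", "consumer"),
    "key-e1": SigningKey("key-e1", "enterprise"),
}

def signature_is_valid(token: dict, key: SigningKey) -> bool:
    # Stand-in for real cryptographic signature verification.
    return token["sig"] == f"signed-by:{key.key_id}"

def validate_token_flawed(token: dict) -> bool:
    """Accepts a token signed by *any* trusted key: the described flaw."""
    key = TRUSTED_KEYS.get(token["key_id"])
    return key is not None and signature_is_valid(token, key)

def validate_token_fixed(token: dict, required_scope: str) -> bool:
    """Also requires the signing key's scope to match the system's tier."""
    key = TRUSTED_KEYS.get(token["key_id"])
    return (
        key is not None
        and key.scope == required_scope
        and signature_is_valid(token, key)
    )

# A token signed with a consumer key, presented to an enterprise system:
token = {"key_id": "key-c1", "sig": "signed-by:key-c1"}
assert validate_token_flawed(token) is True                # accepted: bad
assert validate_token_fixed(token, "enterprise") is False  # rejected: good
```

The point of the sketch is only that signature validity and key scope are separate checks; skip the second, and a consumer-tier key can authorize enterprise-tier access.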
The company says it has fixed all of the bugs and lapses that cumulatively exposed the key in the debugging environment and allowed it to sign tokens that would be accepted by enterprise systems. But Microsoft's recap still does not fully describe how attackers compromised the engineer's corporate account—such as how malware capable of stealing an engineer's access tokens ended up on its network—and Microsoft did not immediately respond to WIRED's request for more information.
The fact Microsoft kept limited logs during this time period is significant, too, says independent security researcher Adrian Sanabria. As part of its response to the Storm-0558 hacking spree overall, the company said in July that it would expand the cloud logging capabilities that it offers for free. “It's particularly notable because one of the complaints about Microsoft is that they don't set up their own customers for security success,” Sanabria says. “Logs disabled by default, security features are an add-on requiring additional spending, or more premium licenses. It appears they themselves got bit by this practice.” As Williams from the Institute for Applied Network Security points out, organizations like Microsoft must face highly motivated and well-resourced attackers who are unusually capable of capitalizing on the most esoteric or improbable mistakes. He says that from reading Microsoft's latest updates on the situation, he is more sympathetic to why the situation played out the way it did.
“You'll only hear about highly complex hacks like this in an environment like Microsoft's,” he says. “In any other organization, the security is relatively so weak that a hack doesn't need to be complex. And even when environments are pretty secure, they often lack the telemetry—along with the retention—needed to investigate something like this. Microsoft is a rare organization that has both. Most organizations wouldn't even store logs like this for a few months, so I'm impressed that they had as much telemetry as they did." Update 9:55 am, September 7, 2023: Added new details about how the attackers compromised a Microsoft engineer's account, which made theft of the signing key possible.
" |
209 | 2,023 | "How China Demands Tech Firms Reveal Hackable Flaws in Their Products | WIRED" | "https://www.wired.com/story/china-vulnerability-disclosure-law" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Andy Greenberg Security How China Demands Tech Firms Reveal Hackable Flaws in Their Products PHOTO-ILLUSTRATION: WIRED STAFF; GETTY IMAGES Save this story Save Save this story Save For state-sponsored hacking operations, unpatched vulnerabilities are valuable ammunition. Intelligence agencies and militaries seize on hackable bugs when they're revealed—exploiting them to carry out their campaigns of espionage or cyberwar—or spend millions to dig up new ones or to buy them in secret from the hacker gray market.
But for the past two years, China has added another approach to obtaining information about those vulnerabilities: a law that simply demands that any network technology business operating in the country hand it over. When tech companies learn of a hackable flaw in their products, they’re now required to tell a Chinese government agency—which, in some cases, then shares that information with China's state-sponsored hackers, according to a new investigation. And some evidence suggests foreign firms with China-based operations are complying with the law, indirectly giving Chinese authorities hints about potential new ways to hack their own customers.
Today, the Atlantic Council released a report—whose findings the authors shared in advance with WIRED—that investigates the fallout of a Chinese law passed in 2021, designed to reform how companies and security researchers operating in China handle the discovery of security vulnerabilities in tech products. The law requires, among other things, that tech companies that discover or learn of a hackable flaw in their products must share information about it within two days with a Chinese agency known as the Ministry of Industry and Information Technology. The agency then adds the flaw to a database whose name translates from Mandarin as the Cybersecurity Threat and Vulnerability Information Sharing Platform but is often called by a simpler English name, the National Vulnerability Database.
The report’s authors combed through the Chinese government's own descriptions of that program to chart the complex path the vulnerability information then takes: The data is shared with several other government bodies, including China’s National Computer Network Emergency Response Technical Teams/Coordination Center, or CNCERT/CC, an agency devoted to defending Chinese networks. But the researchers found that CNCERT/CC makes its reports available to technology "partners" that include exactly the sort of Chinese organizations devoted not to fixing security vulnerabilities but to exploiting them. One such partner is the Beijing bureau of China's Ministry of State Security, the agency responsible for many of the country's most aggressive state-sponsored hacking operations in recent years, from spy campaigns to disruptive cyberattacks. And the vulnerability reports are also shared with Shanghai Jiaotong University and the security firm Beijing Topsec, both of which have a history of lending their cooperation to hacking campaigns carried out by China's People's Liberation Army.
“As soon as the regulations were announced, it was apparent that this was going to become an issue,” says Dakota Cary, a researcher at the Atlantic Council's Global China Hub and one of the report’s authors. “Now we've been able to show that there is real overlap between the people operating this mandated reporting structure who have access to the vulnerabilities reported and the people carrying out offensive hacking operations.” Given that patching vulnerabilities in technology products almost always takes far longer than the Chinese law’s two-day disclosure deadline, the Atlantic Council researchers argue that the law essentially puts any firm with China-based operations in an impossible position: Either leave China or give sensitive descriptions of vulnerabilities in the company’s products to a government that may well use that information for offensive hacking.
The researchers found, in fact, that some firms appear to be taking that second option. They point to a July 2022 document posted to the account of a research organization within the Ministry of Industry and Information Technology on the Chinese-language social media service WeChat. The posted document lists members of the Vulnerability Information Sharing program that “passed examination,” possibly indicating that the listed companies complied with the law. The list, which happens to focus on industrial control system (or ICS) technology companies, includes six non-Chinese firms: Beckhoff, D-Link, KUKA, Omron, Phoenix Contact, and Schneider Electric.
WIRED asked all six firms if they are in fact complying with the law and sharing information about unpatched vulnerabilities in their products with the Chinese government. Only two, D-Link and Phoenix Contact, flatly denied giving information about unpatched vulnerabilities to Chinese authorities, though most of the others contended that they only offered relatively innocuous vulnerability information to the Chinese government and did so at the same time as giving that information to other countries’ governments or to their own customers.
The Atlantic Council report’s authors concede that the companies on the Ministry of Industry and Information Technology’s list aren’t likely handing over detailed vulnerability information that could immediately be used by Chinese state hackers. Coding a reliable “exploit,” a hacking software tool that takes advantage of a security vulnerability, is sometimes a long, difficult process, and the information about the vulnerability demanded by Chinese law isn’t necessarily detailed enough to immediately build such an exploit.
But the text of the law does require—somewhat vaguely—that companies provide the name, model number, and version of the affected product, as well as the vulnerability's “technical characteristics, threat, scope of impact, and so forth.” When the Atlantic Council report’s authors got access to the online portal for reporting hackable flaws, they found that it includes a required entry field for details of where in the code to “trigger” the vulnerability or a video that demonstrates “detailed proof of the vulnerability discovery process,” as well as a nonrequired entry field for uploading a proof-of-concept exploit to demonstrate the flaw. All of that is far more information about unpatched vulnerabilities than other governments typically demand or that companies generally share with their customers.
Even without those details or a proof-of-concept exploit, a mere description of a bug with the required level of specificity would provide a “lead” for China’s offensive hackers as they search for new vulnerabilities to exploit, says Kristin Del Rosso, the public sector chief technology officer at cybersecurity firm Sophos, who coauthored the Atlantic Council report. She argues the law could be providing those state-sponsored hackers with a significant head start in their race against companies’ efforts to patch and defend their systems. “It’s like a map that says, ‘Look here and start digging,’” says Del Rosso. “We have to be prepared for the potential weaponization of these vulnerabilities.”
If China’s law is in fact helping the country’s state-sponsored hackers gain a greater arsenal of hackable flaws, it could have serious geopolitical implications. US tensions with China over both the country’s cyberespionage and apparent preparations for disruptive cyberattack have peaked in recent months. In July, for instance, the Cybersecurity and Infrastructure Security Agency (CISA) and Microsoft revealed that Chinese hackers had somehow obtained a cryptographic key that allowed Chinese spies to access the email accounts of 25 organizations, including the State Department and the Department of Commerce. Microsoft, CISA, and the NSA all warned as well about a Chinese-origin hacking campaign that planted malware in electric grids in US states and Guam, perhaps to obtain the ability to cut off power to US military bases.
Even as those stakes rise, the Atlantic Council’s Cary says he’s had firsthand conversations with one Western tech firm on the Ministry of Industry and Information Technology’s list that directly told him it was complying with China’s vulnerability disclosure law. According to Cary, the lead executive for the Chinese arm of the company—which Cary declined to name—told him that complying with the law meant that it had been forced to submit information about unpatched vulnerabilities in its products to the Ministry of Industry and Information Technology. And when Cary spoke to another executive of the company outside of China, that executive wasn’t aware of the disclosure.
Cary suggests that a lack of awareness of vulnerability information shared with the Chinese government may be typical for foreign companies that operate in the country. “If it’s not on executives’ radar, they don’t go around asking if they’re in compliance with the law that China just implemented,” says Cary. “They only hear about it when they’re not in compliance.”
Of the six non-Chinese firms on the Ministry of Industry and Information Technology’s list of compliant ICS technology firms, Taiwan-based D-Link gave WIRED the most direct denial, responding in a statement from its chief information security officer for North America, William Brown, that it “has never provided undisclosed product security information to the Chinese government.” German industrial control system tech firm Phoenix Contact also denied giving China vulnerability information, writing in a statement, “We make sure that potential new vulnerabilities are handled with utmost confidentiality and by no means get into the hands of potential cyber attackers and affiliated communities wherever they are located.”
Other companies on the list said that they do report vulnerability information to the Chinese government, but only the same information provided to other governments and to customers. German industrial automation firm KUKA responded that it “fulfills legal local obligations in all countries, where we operate,” but wrote that it offers the same information to its customers, publishes known vulnerability information about its products on a public website, and will comply with a similar upcoming law in the EU that requires disclosing vulnerability information. Japanese technology company Omron similarly wrote that it gives vulnerability information to the Chinese government, CISA in the US, and the Japanese Computer Emergency Response Team, as well as publishing information about known vulnerabilities on its website.
German industrial automation firm Beckhoff spelled out a similar approach in more detail. “Legislation in several nations requires that any vendor selling products in their market must inform their authorized body about security vulnerabilities prior to their publication,” wrote Torsten Förder, the company’s head of product security. “General information about the vulnerability is disclosed as further research and mitigation strategies are developing. This enables us to notify all regulatory bodies quickly, while refraining from publishing comprehensive information on how to exploit the vulnerability under investigation.”
French electric utility technology firm Schneider Electric offered the most ambiguous response. The company’s head of product vulnerability management, Harish Shankar, wrote only that “cybersecurity is integral to Schneider Electric’s global business strategy and digital transformation journey” and referred WIRED to its Trust Charter as well as the cybersecurity support portal on its website, where it releases security notifications and mitigation and remediation tips.
Given those carefully worded and sometimes elliptical responses, it’s difficult to know exactly to what degree companies are complying with China’s vulnerability disclosure law—particularly given the relatively detailed description required on the government’s web portal for uploading vulnerability information. Ian Roos, a China-focused researcher at cybersecurity R&D firm Margin Research who reviewed the Atlantic Council report prior to publication, suggests that companies might be engaging in a kind of “malicious compliance,” sharing only partial or misleading information with Chinese authorities. And he notes that even if they are sharing solid vulnerability data, it may still not be specific enough to be immediately helpful to China’s state-sponsored hackers. “It’s very hard to go from ‘there's a bug here’ to actually leveraging and exploiting it, or even knowing if it can be leveraged in a way that would be useful,” Roos says.
The law is still troubling, Roos adds, since the Chinese government has the ability to impose serious consequences on companies that don’t share as much information as it would like, from hefty fines to revocation of business licenses necessary to operate in the country. “I don’t think it’s doomsday, but it’s very bad,” he says. “I think it absolutely does create a perverse incentive where now you have private organizations that need to basically expose themselves and their customers to the adversary.” In fact, China-based staff of foreign companies may be complying with the vulnerability disclosure law more than executives outside of China even realize, says J. D. Work, a former US intelligence official who is now a professor at National Defense University College of Information and Cyberspace. (Work holds a position at the Atlantic Council, too, but wasn’t involved in Cary and Del Rosso’s research.) That disconnect isn’t just due to negligence or willful ignorance, Work adds. China-based staff might broadly interpret another law China passed last year focused on countering espionage as forbidding China-based executives of foreign firms from telling others at their own company about how they interact with the government, he says. “Firms may not fully understand changes in their own local offices’ behavior,” says Work, “because those local offices may not be permitted to talk to them about it, under pain of espionage charges.” Sophos’ Del Rosso notes that even if companies operating in China are finding the wiggle room to avoid disclosing actual, hackable vulnerabilities in their products today, that’s still no guarantee that China won’t begin tightening its enforcement of the disclosure law in the future to close any loopholes.
“Even if people aren't complying—or if they are complying but only to a certain extent—it can only devolve and get worse,” says Del Rosso. “There’s no way they’re going to start asking for less information, or requiring less of people working there. They’ll never get softer. They’ll crack down more.” Updated 9:20 am, September 6, 2023: A previous version of this article incorrectly identified Margin Research's Ian Roos. We regret the error.
" |
210 | 2,018 | "These Wasp-Like Drones Lift Heavy Loads With Their Bellies | WIRED" | "https://www.wired.com/story/wasp-like-drones" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Matt Simon Science These Wasp-Like Drones Lift Heavy Loads With Their Bellies Renaud Visage/Getty Images Save this story Save Save this story Save You might know wasps for their ability to brainwash cockroaches or inflict one of the most painful stings on Earth—one so powerful that the actual scientific advice to victims is to just lie down and scream until it passes.
Lesser-known is the wasp’s superlative ability to carry loads that are unexpectedly heavy given the creature’s size.
Small drones, or “micro air vehicles,” are only able to lift the equivalent of their own weight. If we want flying robots that can move massive objects without requiring them to be the size of pterodactyls, engineers will need to come up with new ways of lifting stuff. So drone designers are looking to wasps for help, and developing creative ways to use the environment itself as a secret weapon in robotics.
If a wasp stings and knocks out prey that’s too big for it to fly off with, the predator drags the thing away. It can do this using a structure on its feet called an arolium, a pad that helps the insect get a grip on a surface. Combined with claws on the feet, the arolium allows wasps to maneuver objects that they can’t outright fly away with. Which means they can punch—or sting—far above their weight class.
Engineers want drones to do the same. So a new class of robots, known as FlyCroTugs, takes a cue from these feisty fliers. On the surface, they look like regular old quadrotors that would fit in your palm. But the secret is hidden away on their bellies. While sitting on the ground, one version of the machine uses hooks to snag bumps and pits to anchor itself to the surface like a wasp’s claws do, while another version uses a pad to stick to a smooth surface. The machines can then use a tiny winch to lift and drag things up to 40 times their own weight.
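For a rough sense of what that ratio means in force terms, here is a quick back-of-the-envelope sketch. The 100-gram vehicle mass is an assumed figure for illustration; only the one-times and 40-times ratios come from the text above.

```python
# Back-of-the-envelope comparison of flying vs. anchored towing, using
# an assumed 100 g vehicle; only the 1x and 40x ratios come from the text.
ROBOT_MASS_KG = 0.100   # hypothetical palm-sized quadrotor
G = 9.81                # gravitational acceleration, m/s^2

weight_n = ROBOT_MASS_KG * G
print(f"Payload in free flight (~1x body weight): {weight_n:.1f} N")
print(f"Tugging force while anchored (~40x):      {40 * weight_n:.1f} N")
```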
The physics of the hooks are pretty straightforward—good old anchoring for leverage. “We're just trying to get these hooks lined up, one right next to another, and have them each be able to find their own bump and all pull together to generate larger forces than a single hook could,” says Stanford roboticist Matthew Estrada, who describes the machines today in Science Robotics.
The physics of the pad, on the other hand, are more dazzling. The technology, which is inspired not by wasp feet but gecko feet, isn’t particularly new—Stanford researchers have already used it to, for instance, design a gripper that might one day grab space junk in orbit.
But the resulting forces can also give the FlyCroTug a gecko-like grip and, with it, the ability to lift like an insect.
That trick relies on what’s called van der Waals forces. A material on the bottom of the drone is packed with tiny silicone ridges. When it makes contact with a smooth surface, and you tug on it, the ridges align to the surface in a uniform direction. “They all lay down and create very intimate contact with whatever they're pressed up against,” says Estrada. The contact is so intimate that a minute attraction develops on the molecular level for each ridge. Because there are so many of them packed into the material, those forces add up to produce excellent adhesion.
That’s how geckos can manage to walk up a wall, and how the FlyCroTugs can lift 40 times their weight. As long as the robots are sitting stationary on, say, the edge of a table, they can use van der Waals forces to winch objects far heavier than themselves. So lifting a water bottle that’s sitting on the ground, for instance.
If you wanted to lift something bigger, you could employ several of these tiny robots. That might be more useful than simply scaling the drones up to increase their power. This approach might make them cheaper to manufacture, and allow them to work their way into tight spaces if need be. Who needs bulk when you’ve got numbers? Unlike earlier bio-inspired drones, FlyCroTug doesn't look for inspiration in a wasp merely as a flying animal, but as a larger system. "Flying insects are not just about flying when they carry an object," says Caltech roboticist Soon-Jo Chung, who's developed a bat-inspired drone.
They also drag loads that would otherwise be too much to carry. "That's the very interesting innovation and contribution of this paper."
In other words, making use of the ground or another feature of the environment can help supercharge new robots. Most bots roll on the ground or fly through the air without interacting much with their surroundings. FlyCroTugs are fundamentally different: They leverage the environment itself to increase their power. A surface isn’t just something to navigate, it’s something to use as a tool for some serious winching.
This new lifting ability isn’t only useful for dragging around big objects. Two robots can also work together to manage a complex manipulation such as opening a door. The first drone wheels into position and extends a spring-loaded hook under the door. The second robot’s own spring-loaded hook snags the handle. Then, braced against the door, the second robot tugs the handle down while the first robot tugs the door open.
Stanford/Biomimetics & Dexterous Manipulation Lab The idea is that groups of adhesive robots can tackle tasks that individual robots might struggle with. “Maybe think of each robot as a move on a chess board,” says Estrada. “How are you going to build up exerting these forces in different directions to attain more dexterous tasks?” Instead of loading complex capabilities into one highly sophisticated and expensive robot, the solution in some cases may be to coordinate multiple bots instead.
Or at some point the researchers could combine the two techniques—hooks for grabbing onto rough materials and pads for smooth ones—in a single drone that works on a wider array of surfaces.
While leaving out the stinger, of course. Let’s go ahead and leave that path unexplored.
" |
211 | 2,021 | "The New ‘Matrix Resurrections’ Trailer Reclaims the Red Pill Narrative | WIRED" | "https://www.wired.com/story/matrix-resurrections-trailer" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Angela Watercutter Culture The New Matrix Resurrections Trailer Reclaims the Red Pill Narrative The first trailer for The Matrix Resurrections features familiar faces—and a now-iconic choice.
Courtesy of Warner Bros. Pictures Save this story Save Save this story Save The choice has always been, relatively speaking, simple: red pill or blue pill.
Swallow the red and it’s like eating from the tree of knowledge of good and evil—suddenly all the universe’s dark secrets are revealed. Take the blue, remain in blissful ignorance. In 1999, Laurence Fishburne’s Morpheus presented this option to Keanu Reeves’ Neo, who gulped down the red one with only the slightest trepidation. His narrative arc was changed, and a meme was born.
In the two decades since, the sociopolitical meaning of red pill vs. blue pill has evolved quite a bit. Most recently, the idea of “red-pilling” has become a metaphor for a certain kind of political awakening, an adoption of far-right, and often misogynistic, views. The phrase perhaps reached its nadir last year when Elon Musk sent a tweet encouraging his followers to “take the red pill,” to which then-presidential adviser Ivanka Trump responded “Taken!” Not one to let her work be misconstrued, Lilly Wachowski—who, along with her sister Lana, created the Matrix franchise—quickly responded “fuck both of you.” It was one of the first, if not the first, times the movie’s creators expressed discontent at the way their creation had been co-opted by the darker corners of the internet.
Or at least it was until Tuesday, when a mysterious new landing page emerged on WhatIsTheMatrix.com, teasing the trailer that dropped this morning. There they were, right on the landing page: a red pill and a blue one. Click the red, and the voice of Yahya Abdul-Mateen II (who is playing an as-yet-unnamed character, but seems to be filling the Morpheus role this time around) recites the time of day before saying “that couldn’t be further from the truth.” Click the blue, and it’s the voice of Neil Patrick Harris saying “you’ve lost your capacity to discern reality from fiction.” Harris, it seems, is Neo’s therapist, delivering a never-ending stream of blue pills.
In both the website and the new trailer, the pill-popping choice remains prevalent. It’s easy to imagine that in a different time and place, the creators of a franchise would prefer to distance themselves from a creation that has been politicized and turned polarizing. Lana Wachowski, who is helming the upcoming Matrix Resurrections solo, clearly has no interest in that. Instead, she’s presenting the world of the red-pilled as the place where reality is accepted and, seemingly, a group of women and people of color are fighting for a world made in their image.
And what an image it is. For those worried the aesthetic of the franchise would’ve eroded in the years since 2003’s Matrix Revolutions, fear not. The cascading bits of green code are still here, the all-black sartorial choices remain, and—perhaps most importantly—there are lots of motorcycle and car chases and bullet-stopping. (TL;DR: It’s pretty sick.) The new trailer also features the reunion of Neo and Trinity (Carrie-Anne Moss), both of whom seemingly have been living a blue-pill existence, unaware of their previous revolutionary lives until they meet again at a café. It’s telling that it happens this way. At the beginning of the trailer, Neo is once again Thomas Anderson, and now lives and works in the heart of the San Francisco tech scene. He’s still a battery powering the machine. The gag is that in the metaverse of Resurrections, Mr. Anderson could easily have worked on the social media app that spread all those red-pill memes in the first place.
When the original Matrix premiered in 1999, it was the tail-end of Bill Clinton's presidency. The economy was strong and capitalism was the counterculture’s main enemy. By the time the original trilogy wrapped in 2003, 9/11 had happened and George W. Bush was in office. It was harder to see then, but a cultural shift was beginning, one that would change the political landscape forever and ultimately help usher in the era of Donald Trump.
It was during those years that the Wachowskis’ central metaphor was co-opted by forces antithetical to their vision—a shift that gives this new Matrix even higher stakes. It must speak to long-time fans and also answer some of their extrapolations from the source material. In the years since 2003, both of the series' creators have come out as trans women, and Lilly Wachowski has noted that the franchise is an allegory for trans identity. Filmmakers don’t often get much of a say in how their work is adopted and interpreted, but with Matrix Resurrections , Lana Wachowski has a chance—after more than 20 years of fans interpreting them in their own ways—to stipulate how she wants her movies to be viewed. Whether they choose to see her vision is up to them.
" |
212 | 2,022 | "The US Is Measuring Extreme Heat Wrong | WIRED" | "https://www.wired.com/story/the-us-is-measuring-extreme-heat-wrong" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Gregory Barber Science The US Is Measuring Extreme Heat Wrong Photograph: MICHAEL HANSON/Getty Images Save this story Save Save this story Save In the late 1970s, a physicist and textiles engineer in Texas named Robert Steadman published a paper called “ The Assessment of Sultriness.
” The title reflected an unpleasant sort of steaminess—how temperature and humidity combine to make life hard on the body. To do it, he drew on a long history of experimentation. In the 18th century, people climbed into ovens warmed to 250 degrees Fahrenheit to see how long they could suffer, as they watched steaks cook beside them. In the 19th and early 20th centuries, researchers observed people sweat in Turkish baths and reported from mines where they measured the ambient conditions as workers collapsed from heat exhaustion. Later on, the military picked up more of the testing, deriving equations for how blood flow, sweat, and breathing respond to atmospheric extremes.
What was unique to Steadman was his intimate knowledge of clothes; he was known for projects like a universal sizing system for garments, and motors that could spin fine cotton yarn. After all, he theorized, people are rarely naked in the heat, so our perception of it must be mediated by a combination of physiology and clothing. His formulas assumed precise percentages of how much skin would be covered with fabric, and how specific mixes of air and fiber would transfer heat from the air.
What’s surprising is that, for a set of calculations developed by a textiles researcher, Steadman’s measure of sultriness proved useful for weather forecasters, especially in the United States. In 1990, a scientist at the National Weather Service adapted them with Steadman’s key features more or less intact. Henceforth, the sultriness index came to be known more (or perhaps less) pithily as the “heat index," though it's also sometimes called the “apparent temperature” or “real feel.” If you have been caught in this summer’s heat waves, this is likely a number you have consulted to better understand the torturous outdoors. It’s the measure that’s supposed to include an overlooked factor in the human experience with heat: humidity. That wetness in the air slows the evaporation of sweat off your skin—a key way of staying cool.
What made Steadman’s index successful was that the numbers felt right, in a literal sense. The heat index reads like a temperature, but it’s wobblier than that, a perception rooted in physiological reality. When two different combinations of heat and humidity result in the same heat index—say, 96 degrees Fahrenheit/50 percent humidity and 86 degrees/95 percent humidity, which both have a heat index of 108—this is meant to signal that the body in each scenario is under a similar level of stress as it tries to cool down. As the heat index rises, the miracle of internal thermoregulation that fixes our bodies at 98.6 degrees begins to crumble. Our core temperature rises, which starts off as unpleasant and then gets dangerous.
There’s a roughly 10 degree window before all the chemistry that sustains life begins to fail.
That means death.
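To see how two very different temperature and humidity pairs can land on the same number, here is a minimal sketch using the Rothfusz regression, the polynomial fit of Steadman's tables that the National Weather Service publishes for the heat index; this simplified version omits the low-range and extreme-humidity adjustments the full NWS procedure layers on top.

```python
def heat_index_f(temp_f: float, rel_humidity: float) -> float:
    """Rothfusz regression for the NWS heat index, in degrees Fahrenheit.

    temp_f: air temperature (deg F); rel_humidity: relative humidity (0-100).
    """
    t, rh = temp_f, rel_humidity
    return (
        -42.379
        + 2.04901523 * t
        + 10.14333127 * rh
        - 0.22475541 * t * rh
        - 6.83783e-3 * t * t
        - 5.481717e-2 * rh * rh
        + 1.22874e-3 * t * t * rh
        + 8.5282e-4 * t * rh * rh
        - 1.99e-6 * t * t * rh * rh
    )

# The two combinations from the text both land near the same value:
print(round(heat_index_f(96, 50)))  # -> 108
print(round(heat_index_f(86, 95)))  # -> 108
```

Both pairs come out near 108, which is exactly the equivalence the index is meant to express: a comparable load on a body trying to shed heat.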
But there’s a problem with Steadman’s calculations: They weren’t actually built to handle those sorts of extreme conditions. At a certain threshold—one that includes a plausibly steamy combination of 80 percent humidity and 88 degrees Fahrenheit—the heat index veers into predicting what David Romps, a physicist and climate scientist at the University of California, Berkeley, calls “unphysical conditions” that rarely happen in the lower parts of the atmosphere. This includes supersaturated air making contact with the skin—that is, air that’s more than 100 percent saturated with water.
Temperature and humidity conditions beyond that threshold are somewhat rare—and when they do happen, it’s possible to extrapolate from Steadman's model to come up with an estimated heat index value. But estimates are estimates, and those kinds of heat waves are becoming more common as temperatures rise. So Romps and his graduate student, Yi-Chuan Lu, began taking a look at the model’s fundamentals.
They quickly realized that, for the long list of assumptions in the equations, certain things were missing. For one thing, there is a natural solution to the supersaturation problem: When the air is too wet for human sweat to evaporate, it can still bead and drip off the skin, providing some relief.
With the relevant variables tweaked, the pair noticed a clear pattern: The heat index model was underestimating the severity of the most intense heat waves—drastically, in some cases. When applied to the infamous 1995 Chicago heat wave, the updated model spat out a revised heat index of 154, much higher than the original predicted peak of 135. That underestimate would help explain the severity of the crisis, which killed more than 700 people. An Associated Press article covering a mass burial, the largest in the county’s history, describes impoverished Chicagoans caught unexpectedly in their oven-like homes.
That’s why, even though heat indexes of 135 and 154 both appear abstractly high, no one should get hung up on the idea that “hot is hot,” Romps says. After all, the heat index is supposed to say something about the body and the level of danger. Distinctions at the extremes matter. Indeed, a year after the heat wave, a group of scientists chastised the city for failing to send more urgent warnings to Chicagoans that would have pushed people to seek out cooler shelter and water.
But it’s here that the usefulness of the heat index again retreats into murkiness. A common way of visualizing the heat index is as a color-coded chart that tracks the growing toll on the body. But some have argued those categories are suspect. In 2020, lawyers for the US Postal Service argued for the invalidation of five citations brought by the Occupational Safety and Health Administration for alleged failures to adequately protect workers from heat. There was not enough scientific evidence, they said, to tie “extreme danger” from heat disorders to a particular range of heat index values. “Where did the legends and color-coding come from? I had to dig that out,” says Arthur Sapper, one of the lawyers who represented USPS. “I found out that they originated from an unlikely place: not a peer-reviewed scientific journal but a popular magazine called Weather Magazine.” The judge agreed.
That reflects how niche the heat index really is, says Matthew Huber, director of the Institute for a Sustainable Future at Purdue University. Though second nature to US forecast consumers, it’s mostly unknown to the rest of the world. And in scientific circles, it's not held in the highest regard. Much of the research done on physiological responses to heat is instead tied to other scales, such as the wet bulb globe temperature (WBGT), which includes factors like wind variability and solar radiation that the heat index leaves out for simplicity. “It has a very sound physiological basis and a very sound empirical basis,” Huber says.
One reason the heat index has stuck around is that measuring those extra variables involves cumbersome tools that aren’t available at most weather stations. And WBGT is also trickier to read, because the readings don’t map as closely to our understanding of humidity-free temperature. But ideally, we’d learn. “If I had a choice of which metric to use, the heat index would be near the very bottom,” Huber says.
Another issue for the heat index is that it imagines a particular kind of person inhabiting particular conditions: someone who’s healthy and a specific height and weight, and who has easy access to water and shade. And because humidity isn’t measured everywhere, heat index measurements aren’t geographically precise either; in some places, like a vast stretch of Eastern Oregon, there’s only a single weather station that measures it. “We don’t really know what we’re experiencing on a day-to-day basis just under our noses,” says Vivek Shandas, a professor who studies climate adaptation at Portland State University and has been developing strategies for more localized measurements.
That’s one of the reasons why agencies like the National Weather Service have been testing other approaches, such as a “heat risk” system that categorizes heat waves according to additional local factors like the likeliness of power outages and the unusualness of the weather. But scientists like Huber and Romps say they haven’t yet seen information to determine the accuracy of those measurements and how they match up with human physiology. “There’s 10 ways they could do this wrong,” Huber says. (The NWS didn’t reply to questions about how it calculates the heat risk categories.) In the meantime, both Shandas and Huber say it’s good to be figuring out better math for the heat index, especially at the extremes. The world is only getting more sultry, and the heat index is sticky in the American consciousness. That’s especially true in places east of the Rockies, where humidity reigns in the summers—and not just in the Deep South, but the Midwest and mid-Atlantic too. “I can talk trash about the heat index,” Huber says. “But actually, this will make a difference as we start going into this warmer world.”
" |
213 | 2,023 | "Everything We Know About Neuralink's Brain Implant Trial | WIRED" | "https://www.wired.com/story/everything-we-know-about-neuralinks-brain-implant-trial" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Emily Mullin Science Everything We Know About Neuralink’s Brain Implant Trial Photograph: Nathan Laine/Bloomberg/Getty Images Save this story Save Save this story Save Elon Musk’s brain implant company Neuralink has announced it is one step closer to putting brain implants in people.
Today, the company stated that it will begin recruiting patients with paralysis to test its experimental brain implant and that it has received approval from a hospital institutional review board. Such boards are independent committees assembled to monitor biomedical research involving human subjects and flag any concerns to investigators. Neuralink is dubbing this “the Prime Study,” for Precise Robotically Implanted Brain-Computer Interface.
Neuralink did not specify where the trial will take place, and company representatives did not immediately respond to WIRED’s emailed request for an interview.
Neuralink is one of a handful of companies developing a brain-computer interface, or BCI, a system that collects brain signals, analyzes them, and translates them into commands to control an external device. In May, the company said on X, formerly Twitter, that it had received approval from the US Food and Drug Administration to conduct its first in-human clinical study, but it didn’t provide further details at the time.
In a post on its website today, Neuralink states that the initial goal of its BCI will be to “grant people the ability to control a computer cursor or keyboard using their thoughts alone.” The clinical trial will test the safety of the company’s implant and surgical robot and assess the BCI’s functionality.
Neuralink has created a patient registry for people who are interested in learning whether they may qualify for the study. In a brochure on its website, Neuralink says it is looking for participants who have quadriplegia, or paralysis in all four limbs, due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS), and are at least 22 years old. For those chosen to participate, the study will involve a combination of nine at-home and in-person clinic visits over 18 months. Neuralink anticipates the study will take six years to complete.
Neuralink’s coin-sized implant is not visible when implanted, according to the company. It records neural activity using 1,024 electrodes, distributed across 64 threads, each thinner than a human hair.
During the study, the robot will surgically place the implant into a part of the brain that controls movement intention. Once in place, the implant is designed to record and transmit brain signals wirelessly to an app that decodes movement intention.
The company has not revealed the exact region of the brain its device will be embedded in, which hospital’s institutional review board granted the approval, nor how many participants it will ultimately enroll in the study.
At a Neuralink “show and tell” last November, Musk spoke about two possible use cases for the implant: to help people with paralysis control tech devices and to restore vision. But there was no mention of a vision prosthetic in today’s release.
Neuralink is one of a handful of companies racing to bring a BCI to market. Although such devices have been used experimentally since the 1960s, none is available commercially. Other research efforts have allowed paralyzed people to control computers and prosthetic limbs with their thoughts, or to use a computer to speak, mostly in lab settings.
Synchron, one of Neuralink’s competitors, has shown that its implant can be used at home to allow paralyzed patients to do online banking, shopping, and emailing. The company’s implant resembles a flexible mesh stent and is threaded up through the jugular vein to sit against the brain, rather than inserted into the brain directly.
Two former Neuralink employees have started their own BCI ventures. Former Neuralink president Max Hodak established Science Corp. in 2021 to develop a prosthesis to provide artificial vision to blind people. And Benjamin Rapoport, an original member of Musk’s team, founded Precision Neuroscience in 2020. Earlier this year, the company temporarily placed its implant on the brains of three patients to test the device’s ability to read and record electrical activity.
Jacob Robinson, a professor of electrical and computer engineering at Rice University, says 18 months is longer than some previous clinical trials of brain implants. (The devices are typically taken out once a study ends.) “I think this is good news for people who will be looking to benefit from this procedure,” says Robinson, who is also CEO and co-founder of Motif Neurotech, which is developing a brain implant for treating depression. “Ideally these technologies will be functional for several years, but this is a great step in the right direction.” Update 9-19-2023 6:25 pm ET: This story was updated to add comments from Jacob Robinson.
" |
214 | 2,021 | "The (Very Slow) Race to Move Forests in Time to Save Them | WIRED" | "https://www.wired.com/story/the-very-slow-race-to-move-forests-in-time-to-save-them" | "Lauren Markham, Science
The (Very Slow) Race to Move Forests in Time to Save Them
Photograph: Tyler Hulett/Getty Images
This story originally appeared on Mother Jones and is part of the Climate Desk collaboration.
I drove to Oregon because I wanted to see the future. Our rapidly changing climate vexes me, keeps me up at night—perhaps you’ve felt this too—and recently I’d become particularly preoccupied with trees. In California, where I live, climate change helped kill nearly 62 million trees in 2016 alone, and last year, 4.2 million acres of our state burned. I wanted to know what was in store for our forests and, because we humans rely on them for so much—for clean air, for carbon sequestration, for biodiversity, for habitat, for lumber and money, for joy—what was in store for us.
I’d read about a group of scientists who were not only studying the calamities befalling our forests but also working to help the trees migrate in advance of coming doom. So in May, I headed to a 3½-acre stand of roughly 1,000 Douglas firs at a US Forest Service nursery outside of Medford. The grove was situated in a wide valley in the southwestern corner of the state, nestled between the Cascades to the east and the Coast Range to the west. Brad St. Clair, a Forest Service scientist who has studied the genetic adaptation of trees for more than two decades, met me by the road. He’s short and rugged, as if built for adventuring and tending to the lives of trees, and he arrived in a souped-up Sprinter van loaded with an armory of outdoor gear. In 2009, he and his team planted this and eight other stands of firs after they’d gathered seeds from 60 tree populations all over Washington, Oregon, and California and grown them into seedlings in a greenhouse. The seeds were sourced from as high as 5,400 feet in the Sierras and as low as the coast, from Mendocino County, California, all the way north to Central Washington, and were planted in intermixed clusters at each of the nine sites to see how they would fare in a hotter, drier climate than the ones they’d come from. In other words, to see if they’d make it in the future.
Douglas fir, a tall, narrow-trunked evergreen often dragged indoors for Christmas, is a favorite of foresters and logging companies because of its combination of strength, fast growth, and pliability. It can also withstand a change in climate of about 4 degrees Fahrenheit without much trouble. But global average temperatures have already risen by almost 3 degrees since the 1900s, and all models predict that average temperatures will blow through the 4-degree threshold in the next several decades, perhaps rising above 7 degrees by the end of the century.
In the wide, flat expanse of the nursery, the firs were rimmed by fallow land on all sides. St. Clair instructed me to put on safety glasses, and then he ducked down, pushed aside the outermost branches, and slipped into the trees. I followed him. Within two steps, there we were in a veritable, dense forest, as if an enchanted wardrobe had been pulled open to reveal a world transformed. On the periphery it had been hot, but here, as we moved through the dapple, it was cool and fragrant with pine.
A sign mounted on a PVC pipe marked the provenance of the cluster of trees we stood beneath. They came, St. Clair explained, from the Oregon Siskiyou, a dry zone at only slightly higher elevation than where we were today. This is why they were doing so well: Their native climate wasn’t so different from Medford’s. As we moved on, the trees, while still lush and full, grew shorter. Because this next batch was from up in the Cascades, he pointed out, at an elevation far higher than where we stood, the trees were somewhat stunted in this new habitat and couldn’t grow as tall. We kept walking, and after a while the trees grew taller again, looming three times my height before breaking into sky. These trees also came from climates that were dry like Medford, and so found here a happy home—at least for now.
We ducked and trudged through the lower thickets of the healthy trees until we suddenly emerged from the woods onto what I can only describe as an arboreal apocalypse—an open tangle of dead branches, brown and brittle, like an upright graveyard. These ill-fated trees, St. Clair said, had come from the Oregon coast, where it is far wetter. While they’d done OK in the first three years of the study, they just couldn’t make it in the long term. “As the climate warms,” St. Clair said, looking around and pointing up to a dead fir with his walking stick, “you’re going to see more of this.” The future of forests is a grim one—too grim for some of us to bear. By 2030, 75 percent of redwoods will disappear from some of their coastal California habitats. In some climate scenarios, almost none of the namesake species in Joshua Tree National Park will exist. Sea level change is creating ghost forests all along the Eastern Seaboard—already, less than a third of New Jersey’s Atlantic white cedar habitat remains.
Like humans, forests have always migrated for their survival, with new trees growing in more hospitable directions and older trees dying where they are no longer best suited to live. The problem now is that they simply can’t move fast enough. The average forest migrates at a rate of roughly 1,640 feet each year, but to outrun climate change, it must move approximately 9,800 to 16,000 feet—up to 10 times as fast. And in most habitats, the impact of highways, suburban sprawl, and megafarms prevents forests from expanding much at all. Forests simply cannot escape climate change by themselves.
Back in 1992, forest geneticists F. Thomas Ledig and J. H. Kitzmiller coined the term “assisted species migration” in a seminal study in the journal Forest Ecology and Management.
Since then, hundreds of biologists and geneticists like St. Clair have been studying how best to move forests in advance of their looming destruction. To do so requires a complex set of mapping and experiments—understanding, for instance, what climate trees are best suited to grow in, what region will most closely resemble that same climate in, say, 50 years, and what adaptations best ensure that a tree will take root and flourish, build symbiosis with the soil fungi, and not end up a mere matchstick awaiting the next megafire.
St. Clair is something of an assisted migration evangelist, a firm believer that we need to move tree populations, and fast, if we want to keep apace. But due to bureaucratic logjams and a fervent commitment to planting native species, there’s very little assisted migration in the United States—unlike in Canada, where the practice has been adopted with more urgency in recent years. St. Clair and other Forest Service scientists are working to transform assisted migration from a mere research subject to a standard management strategy in our vast, imperiled public lands.
We finished our walk through St. Clair’s baby forest, making our way back to the cars along its outer edges. “The future is terrifying,” I told him. He understood what I meant, he said.
During the talks he gives about his research, he likes to show an image from Lewis Carroll’s Through the Looking-Glass, in which the Red Queen charges forward with her crown and sturdy scepter, pulling frenzied Alice along in her wake. He had the slide printed out and handed it to me as we walked. “Now, here, you see,” the Red Queen says to Alice, “it takes all the running you can do, to keep in the same place.” “So that’s what we gotta do,” he told me, pointing to the Red Queen. “We gotta run.” While assisted migration is a relatively new concept, the movement of forests is as old as trees themselves. Since they first evolved, trees have been shifting north and south, east and west, up and down in elevation as the climate has changed. Forests outran the frost as ice ages set in, and as the ice began melting, they darted back the other way, traversing mountain ranges and unfurling themselves across continents—moving, sentiently, toward climatic conditions that suited their ability to grow and produce the trees of the future.
Of course, while forests move, individual trees can’t. “They are stuck where they are,” says Jessica Wright, a senior Forest Service scientist based in Davis, California, who studies conservation genetics. Trees must try to survive whatever environment they land in. And yet, Peter Wohlleben writes in The Hidden Life of Trees, while every tree has to stay put, “it can reproduce, and in that brief moment when the tree embryos are still packed into seeds, they are free.” The seed sets forth, as Zach St. George chronicles in The Journeys of Trees, carried by the wind or in the belly of a blue jay or stuffed in the cheek of a squirrel, toward its destiny. If it is among the luckiest, it will find a hospitable home and carry the forest forward. Because seeds will only take root in areas suited to their growth, forests tend to move in the direction of their future survival.
Unlike humans, most trees are long-life species, ranging from the yellow birch, which lives roughly 150 years, to the bristlecone pine, the oldest known of which is nearly 5,000 years old. Forests are the trees’ complex civilization, functioning not unlike human cities: a community of beings that talk to one another and organize and defend themselves and create offspring and bid farewell to their dead. In this way and many others, recent research has revealed, trees are spellbinding, ripe for anthropomorphism. They tend to live in interdependent networks, like families, where, with the help of symbiotic fungi, scientists like Suzanne Simard have discovered, they care for their sick, feed one another, and, like a mutual aid society, share resources with those in need. Trees of the same species—and sometimes even those across species—tend to respect one another’s personal space, shifting their growth patterns so that everyone gets enough sunlight. Trees are also adept community organizers who know how to band together to crowd out competitor trees and guard against other threats. When a pest comes, trees can issue chemical warnings to one another so they can launch their defenses. Trees can also register pain. Scientists have found that their root networks, which work with the underworld organisms of fungal mycelia, seem to hold intergenerational knowledge, like a collective brain. Read enough about the mesmerizing science of trees and one begins to feel certain that, if humans behaved like a healthy forest, we’d be far better off—and that we wouldn’t be in our current climate mess in the first place.
Left to their own devices, forests migrate on a near-geologic scale. But people have been moving trees for our own purposes for thousands of years. We’ve done this in small doses, such as planting trees in city gardens or backyards for shade and aesthetic delight, or planting a wall of cypress along a tract of farmland to block the wind. We’ve also moved trees on a far more substantial scale, with a range of outcomes. While apple trees originated in Central Asia, early settlers brought seeds to the Americas and infamously scattered them throughout what is now the United States, where apple pie is now both a signature dessert and a cultural symbol.
Such interventions haven’t always panned out so well: In 1895, the emperor of Ethiopia ordered the planting of fast-growing eucalyptus trees imported from Australia so people would have abundant firewood. But the thirsty eucalyptus crowded out existing trees, and parched once-fertile farmlands. (Eucalyptus trees are also invasive transplants in California, though they have also become critical nesting habitat for the threatened monarch butterfly—the web of interconnectivity is a tangled one.) And in 1904, US foresters began planting Japanese chestnuts to cultivate for wood, bringing chestnut blight to their North American cousins, which were ill-equipped to fight the fungus; by 1940, most adult chestnuts were gone. The movement of trees, scientists caution, must be done with extreme care—and based on history, many are hesitant to do it for fear of throwing off the delicate balance of an existing landscape.
Proponents of assisted migration claim that this balance has already been upended by climate change. They also stress that assisted migration is an umbrella term for a range of activities, some way more far-reaching than others. The most drastic intervention is known as assisted species migration, which transplants species of trees from places where they naturally occur to faraway places where they do not. Then there’s assisted range expansion, which plants trees slightly outside their naturally occurring territory. The strategy involving the least human intervention is known as assisted population migration, which, like St. Clair’s studies of Douglas fir, plants trees of a single species with certain adaptations to a new location where other members of that same species already live. Most scientists advocate the latter two strategies and consider the first one too extreme.
So how to safely move a population to a new habitat—and to know how far to do it, and how fast? “If I knew the answer to that,” Forest Service scientist Kas Dumroese told me, “I’d have the Nobel Prize.” To find out which plants are best suited to which environments, scientists tend to use something called the Common Garden Study, which, like the artificial forest I visited in Oregon, plants flora from a wide range of locations—and thus adapted to a range of conditions—on a single plot to study their response and growth patterns. What scientists have found in most assisted migration garden studies is that the trees that do best are those whose parents and ancestors thrived in similar terrain.
If you move a population of trees adapted to a particular climate too slowly, it’s bound to succumb to the hotter, drier conditions brought on by climate change. But move it too fast to a colder, wetter climate, and the trees might fall victim to too much frost, or to root rot in damp conditions that make them vulnerable to pests. Shifting trees that can handle midcentury climate projections—so new forests are adapted to the temperatures of roughly 2040 to 2070—seems to be the Goldilocks balance that will ensure a population’s survival.
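That Goldilocks matching can be pictured as a nearest-neighbor search: pair a planting site’s projected 2040-to-2070 climate with the seed source whose historical climate comes closest. The Python sketch below illustrates only that logic; the population names, climate numbers, and crude two-variable distance are invented, and real seed-selection tools weigh many more factors.

```python
# Hypothetical sketch of assisted-migration seed matching: choose the seed
# source whose historical climate most closely matches a site's projected
# 2040-2070 climate. Names and numbers are invented; real tools consider
# frost timing, precipitation seasonality, soils, pests, and more.
from math import dist

# (mean annual temperature in C, annual precipitation in mm) under which
# each hypothetical seed-source population evolved:
seed_sources = {
    "coastal_lowland": (11.5, 1800.0),
    "sierra_mid_elevation": (9.0, 900.0),
    "cascades_high_elevation": (6.5, 1100.0),
}

def pick_seed_source(projected_climate, sources):
    """Return the source nearest the projected climate, after a crude
    rescaling so 1 degree counts roughly like 100 mm of precipitation."""
    def rescale(climate):
        temperature, precipitation = climate
        return (temperature, precipitation / 100.0)
    return min(sources, key=lambda name: dist(rescale(sources[name]),
                                              rescale(projected_climate)))

# A burn site projected to be warmer and drier by midcentury:
print(pick_seed_source((10.0, 950.0), seed_sources))  # sierra_mid_elevation
```

The rescaling step matters: without it, precipitation in millimeters would swamp temperature in degrees, which is one reason real tools use carefully weighted measures of climate distance.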
But there are other important considerations, including the symbiotic relationship between soil fungi and trees. Simard, the author of the recent best-selling book Finding the Mother Tree, explains that, while trees will likely find some symbiotic mycelium as long as they are moved within their species’ existing range, that mycelium might not be the best adapted for their needs. Trees can’t be seen as growing in isolation, but need to be considered in terms of the overall health and relationships of a larger ecosystem. “There’s a lot we don’t know,” she told me. Assisted migration “is risky, but, you know, we also have no choice. We have to start experimenting with this. We have to start moving things around and watching and seeing how they do.”
The Forest Service scientists who study assisted migration couldn’t agree more, and they hope that the agency’s forest managers will start using this strategy in actual forests. Despite decades of research, the Forest Service has rarely put assisted migration into practice, in part due to some foresters’ and scientists’ resistance to moving trees outside their agreed-upon range. In the 1930s, the Forest Service created the idea of seed zones—mapping the landscape into areas “within which plant materials can be transferred with little risk of being poorly adapted to their new location,” as the agency states on its website.
Ever since, forest managers have stayed loyal to these zones when selecting seeds for planting.
While assisted migration isn’t strictly prohibited by the Forest Service Manual and its accompanying handbooks—the official policy documents that, as Forest Service land manager Andy Bower explains, guide “every aspect” of how the agency operates—it isn’t encouraged, either. Last fall, Bower, St. Clair, and five other forest geneticists in the Forest Service proposed changes to the manual that include assisted population migration and, in some cases, slight range expansion, as forestry strategies. If their recommendations are accepted, it could drastically accelerate the use of assisted migration nationwide.
The Forest Service doesn’t have to look far for an example of a country taking a more aggressive tack: Canada is substantially ahead of the United States in research and implementation of assisted migration. This is, in part, a result of urgency. In the early aughts, aided by worsening climate change, lodgepole pine forests were devastated by invasive bark beetles and massive wildfires. This was also true in the United States, but when it happened in Canada, the country acted far more aggressively. “It was huge,” Greg O’Neil, a scientist working for the Canadian Forest Service, told me, “like they got hit by a sledgehammer. It really woke up the forestry community.” The Forest Service of British Columbia launched the Assisted Migration Adaptation Trial, or AMAT, in 2009, planting roughly 153,000 trees to see how each would fare in different climates. With more than a decade of results, they have begun to use this data to reforest areas that have been logged or burned.
This is not to say that the method should become the land management strategy in all or even most scenarios. Moving species across a landscape in response to climate change, Dumroese says, should be undertaken according to the Hippocratic oath. “We’re talking about making decisions that have implications we may not understand, that may not even be recognized for a hundred years,” he said, “or even longer.”
One of the troubles with assisted migration is that it’s difficult to know what future climate to plan for. Human choices are hard to predict. The adoption of a Green New Deal, for instance, would significantly affect climate modeling, as would the reelection of Donald Trump in 2024 or the continued reign of Amazon-destroying Jair Bolsonaro in Brazil.
But even in the most optimistic of climate scenarios, the forests need to get moving, from south to north, from lowlands to highlands, so that our landscapes remain populated with trees.
“It’s almost like we have this temporal-centric view of nature,” O’Neil said. “A lot of people view climate change as something that’s going to happen, not something that has already happened.” And though all trees can generally survive a change of 4 degrees Fahrenheit in either direction, O’Neil reminds me that 2.7 degrees—the amount that the climate has already warmed in the past century—is a cataclysmic change of circumstances from a tree’s perspective. Seen this way, he said, “these trees are already a long way from home.” If all we do is help them get back to the kinds of habitats they’d lived in before the climate began to change so rapidly, he added, “I think we’ll be doing a great service.” In May, a few weeks before driving to Oregon, I accompanied Forest Service scientist Jessica Wright from her research station in the Sierra Nevada foothills up Route 50 and into the mountains of the Eldorado National Forest, one of the most ecologically diverse tracts of land in California, spanning nearly 1 million acres. The road wound us upward into the rolling expanse of the Sierras, where towering green pines spread in all directions. Such sights always reminded me of the state’s largesse, and I used to find them transcendental: the sanctity of open space, the vastness of the landscape a mirror for the vastness of the human spirit. But now, this feeling is accompanied by a twin coil of fear.
Fire.
Those trees are exquisite fuel, and it all feels doomed to burn.
We turned onto a dirt road and knocked our way through the forest. After a few minutes, the trees thinned; the lowest branches of ponderosa pines and Douglas firs were charred, and the blackened sticks of former trees pointed skyward like bayonets. The road took us to an open clearing, bare and treeless like a wound. This was the site of the King Fire, which destroyed roughly 250 square miles of the central Sierra foothills in 2014, and it was only now, seven years later, looking green again.
A few years back, Wright started talking to a Forest Service program manager named Dana Walsh about the prospect of an assisted migration research trial on a tract of land that Walsh oversaw—and they decided to plant along this 12-acre patch that had burned. In the winter of 2019, they sowed their 1,200 trees sourced from 24 origin populations. Their hope is to convince other forest managers that assisted migration can be used to replant burned forests in the future—instead of reforesting strictly with local seeds. And several Forest Service scientists, including Wright and St. Clair, are building new seed selection databases that map climate predictions with seed source adaptations, should assisted migration finally be put into practice in the States.
Wright, who has hip-length hair and seems equally at home sporting a hard hat and presenting at a conference, is particularly optimistic about the prospects of planting in burn zones. If a forest will be replanted anyway, why plant what was already there and burned, when we can reforest these burn sites—which have grown all the more common, and so much bigger—with trees that will be better suited to that future in 30 to 50 years? A stressed forest brings diseases and pests, which kill trees, offering more kindling to burn. The healthier a forest, the less likely it is to catch fire.
Along 12 acres of the King Fire site, Wright and her team had planted two kinds of pine: ponderosa—which grow up to 200 feet tall with thick, striated bark—and a type of sugar pine resistant to white pine blister rust, a fungus decimating western sugar pines. To mimic nature, the trees had been planted somewhat willy-nilly along the hillside, as they would grow in the wild. We walked along the planting site, where I tried to spot the trees; at only 2 years old, the saplings were not much higher than my ankle. Some hadn’t made it at all, and some were still slight wisps of life, while others were growing strong and burly.
I asked Wright what she made of the differences in growth. She laughed.
“It’s too early to say,” Wright told me.
But weren’t they impatient, I wanted to know? I was. Why was this tree, on the lower slope, doing so beautifully, its tiny trunk much thicker than the rest, its needles skewering outward like porcupine quills, its yellow-green buds promising new growth?
Wright countered that it’s not until about 10 years into a study that the data starts to be meaningful. “That’s when I start to believe it,” she said. So many things could happen between now and then, and early growth might not end up meaning much. After all, those dead Douglas firs that had so rattled me in Oregon had done great the first few years of the study.
We found some shade under the trees that had survived the 2014 fire, and sat down for lunch. To consider the future of forests is to slip into a timeline so abstract that it’s hard to conceive, but scientists like Wright are in it for the long haul, imagining a life span far beyond their own.
“I won’t see this big tall forest we’re planting now,” she said. Her kid might see it, or perhaps her grandkid. Tending to any kind of future is a gesture of optimism, she concedes, particularly such a distant one. “But I’m good with that.” As a member of the living, it can be difficult to understand how unlikely it is, statistically speaking, to become alive. A healthy beech tree, explains Wohlleben in The Hidden Life of Trees, will produce roughly 1.8 million beechnuts in its lifetime. “From these, exactly one will develop into a full-grown tree,” he writes, “and in forest terms, that is a high rate of success, similar to winning the lottery.” For Joshua trees, the odds of successful reproduction are even longer. For a Joshua tree to be born—a tree that lives in far starker conditions than the beech—its mother has to flower and seed when it reaches sexual maturity. The seed, which resembles a flat puck of black putty smaller than a dime, has to find a home conducive to its germination and bloom. That’s hard enough in the dry expanse of the desert, and harder still as the landscape warms. Its best-case scenario is to find its way to a spot beneath a nurse shrub or blackbrush, where it can germinate protected from the chomp of roving jackrabbits. It would particularly benefit from finding a spot atop a symbiotic soil fungus that lurks beneath the sandy loam and can help the baby Joshua tree grow. If the tree makes it past the perils of early life, it needs another 30 to 60 years before it’s ready to reproduce. Then it would rely on the yucca moth to pollinate it; otherwise, it won’t bear fruit. Then and only then, after this confounding and unlikely gauntlet has been run, will a Joshua tree be able to set seed, the whole tenuous cycle repeating itself.
Scientists have mapped Joshua tree survival against the most dire climatic conditions—i.e., if humans continue at our current rate of consumption and emission—and found that by the year 2100, essentially zero Joshua tree habitat will remain in California’s Joshua Tree National Park, even for trees that are already among the most drought-tolerant.
Lynn Sweet, a plant ecologist who studies Joshua trees at the University of California, Riverside, told me that her team calculated that, under more mitigated scenarios in which carbon emissions were reduced, “we could preserve up to 20 percent or so of habitat in the park and the surroundings,” assuming the moth and mycelium make it in this scenario, too.
When it comes to conservation efforts, humans typically think of the forests most dear to them—the places they grew up visiting, the places where they got married, or where they take weekend hikes, the national parks known for their iconic trees. These places—Sequoia National Park, Olympic, Muir Woods, the Everglades—loom large in our collective consciousness. “I often joke with reporters,” Sweet told me, “that no one is coming out to do a climate change article on the blackbrush bush,” an equally imperiled species in the desert.
Joshua Tree National Park is central on my personal map of sacred places. It was the first place I went backpacking as a kid, the first place I slept under the stars, and a place I’ve returned to again and again to reattune with the world. The Joshua tree’s silhouette is imprinted on many significant memories throughout my life—these are trees I really, really, really want to survive.
After getting vaccinated last spring, I headed down for a few days in search of desert light and those fabled trees. I drove from the south end of Joshua Tree to the north, moving through a low, flat valley where Joshua trees and cholla clustered in mighty, baffling stands. The Joshua trees here in the valley looked healthy enough, but botanists know better: Look closely, they told me, and you’ll see there are no young sprouting among the noble elders. This was a forest of childless parents, living their final days as the last of their kind to call that spot home.
Sweet had directed me to visit Black Rock Canyon, where the healthiest of Joshua trees were now finding space to grow. Here we were at higher elevation than the park’s sweeping flatlands, meaning it was cooler and slightly wetter. “They’re essentially running uphill,” she told me, on an intergenerational march toward higher ground. I took a long solo hike through these highlands where hundreds of Joshuas stood. The trees were lovely to behold from all angles, like benevolent apparitions from some absurdist underworld. But the best view was from above: beholding all those Joshua trees across the valley floor that were thriving, surrounded by their young, with room still to move upward. The problem with up is there’s only so far to go before it’s just sky.
The living will do whatever they need to survive. In the apocalyptic grove near Medford, I had seen one desiccated former tree whose branches were covered in hundreds of cones still affixed to it like Christmas ornaments. St. Clair explained that this behavior was normal enough for a tree in distress. Sensing it will die, the tree bursts forth into cones in a frantic final act of hope: not so much for itself, but for its species.
I left the desert, like I’d left Oregon, having seen what I’d come to see: the future. There wasn’t a single version of it, but many. Another quote St. Clair likes to share is by the late forester and politician Gifford Pinchot: “The vast possibilities of our great future will become realities only if we make ourselves responsible for that future.” If we look into the crystal ball, we see ourselves peering back at us in search of answers to the same questions.
" |
215 | 2,022 | "How Do You Know a Cargo Ship Is Polluting? It Makes Clouds | WIRED" | "https://www.wired.com/story/how-do-you-know-a-cargo-ship-is-polluting-it-makes-clouds" | "Matt Simon, Science
How Do You Know a Cargo Ship Is Polluting? It Makes Clouds
Photograph: Kiyoshi Ota/Bloomberg/Getty Images
If you have a habit of perusing satellite imagery of the world’s oceans—and who doesn’t, really?—you might get lucky and spot long, thin clouds, like white slashes across the sea. In some regions, like off the West Coast of the United States, the slashes might crisscross, creating huge hash marks. That’s a peculiar phenomenon known as a ship track.
As cargo ships chug along, flinging sulfur into the atmosphere, they actually trace their routes for satellites to see. That’s because those pollutants rise into low-level clouds and plump them up by acting as nuclei that attract water vapor, which also brightens the clouds.
Counterintuitively, these pollution-derived tracks actually have a cooling effect on the climate, since brighter clouds bounce more of the sun’s energy back into space.
The Pacific Ocean off of California is particularly hash-marked because there’s a lot of shipping along that coast, and ideal atmospheric conditions for the tracks to form. Well, at least it used to be. In 2020, an International Maritime Organization (IMO) regulation took effect, which severely limited the amount of sulfur ships are allowed to spew. Shipping companies switched to low-sulfur fuel, which improved air quality, especially around busy ports. But in doing so, they reduced the number of ship tracks—which means fewer brightened clouds, and thus more warming.
In the map at right, you can see ship tracks highlighted in purple.
Illustration: Yuan et al.
Writing Friday in the journal Science Advances, researchers described how they used a new machine-learning technique to quantify the clouds better than ever before, showing how the sulfur regulation cut the amount of ship tracks over major shipping lanes in half. That, in turn, has had a moderate warming effect on those regions.
“The big finding is the regulation in 2020, put forward by the IMO, has reduced the global ship-track numbers to the lowest point on the record,” says Tianle Yuan, a climate scientist at NASA and the University of Maryland, Baltimore County, who led the research. (Yes, reduced economic activity during the pandemic lockdowns may have had a small influence too. But ship-track activity has remained low even as cargo traffic has picked back up.) “We’ve had similar but smaller-scale, strict regulations before, and we can also see that impact,” he continues. “But there, the effect is not global.” In Europe and North America, for instance, officials had already sectioned off what are known as emission control areas, or ECAs, which established local versions of the standards set by the 2020 global rule. “The number of tracks within the ECAs, within the control zones, reduced dramatically, to the point of almost disappearing,” Yuan says. “But outside of it, actually we saw some increase because the shipping routes had shifted.” The satellite imagery caught ships doing something sneaky. Outside of control zones, where the vessels weren’t bound by sulfur regulations, they burned regular old fuel. Then once inside an ECA, their operators could switch to low-sulfur fuel, coming in line with the pollution rules. (Sulfur is a normal component of a fossil fuel, and it takes extra processing to remove it. Because low-sulfur fuel is more expensive, it’s more cost-effective for ship operators to spend as much time outside of ECAs as possible, burning the old stuff.) “Our technique can help to validate whether a ship is using clean fuel or not,” says Yuan, “because we can observe indirectly how much pollution they're putting into the air.” Illustration: Yuan et al.
To do all this, Yuan and his colleagues first gathered satellite data, which humans manually combed for ship tracks. They then fed this data into deep learning models, training the algorithms to recognize ship tracks on their own. It’s the same idea behind training an algorithm to recognize cats: If you show it enough pictures, it’ll get the general idea of what a cat looks like. So even though no two ship tracks look the same, the models could generalize well enough to identify them around the world. (You can see the ship tracks as the model saw them in the image above.) The researchers could then feed the models more NASA satellite data, covering all the world’s oceans, so the algorithms could identify the ship tracks and how their numbers changed over the years.
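The passage doesn’t specify the models themselves, but the workflow it describes, hand-labeled examples in, a generalizing detector out, is standard supervised learning. Here is a hypothetical PyTorch sketch of a pixel-wise ship-track detector of that general shape; the tile size, architecture, and training settings are invented for illustration.

```python
# Hypothetical sketch of a supervised ship-track detector: a tiny fully
# convolutional network trained on satellite tiles with human-drawn masks.
# Not the authors' actual architecture; sizes and settings are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                    # per-pixel ship-track logit
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()            # pixel-wise binary labels

# Stand-ins for batches of (cloud-brightness tile, hand-labeled mask):
tiles = torch.randn(8, 1, 64, 64)
masks = (torch.rand(8, 1, 64, 64) > 0.95).float()

for _ in range(100):                        # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(model(tiles), masks)
    loss.backward()
    optimizer.step()

# Once trained on the labeled archive, the same model can sweep the full
# satellite record and count tracks year by year.
with torch.no_grad():
    track_probability = torch.sigmoid(model(tiles))  # values in [0, 1]
```

The payoff is the final step: once the detector generalizes, sweeping decades of global imagery becomes a counting exercise rather than a manual one.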
Illustration: Yuan et al.
As you can see in the image above, there are a number of ship-track hot spots around the world, represented in a gradient that runs from red (high) to white (medium) to blue (low). As the red smudge in the upper left shows, the Pacific Ocean near Southern California and Mexico is particularly prone to ship tracks. Whether clouds form in a given area depends on a number of factors, like the stability and moisture content of the atmosphere, how polluted the air may already be, and the amount of ship traffic.
The dark green lines around Asia, on the other hand, are estimates of emitted sulfur dioxide—not visible ship tracks—showing busy shipping lanes. These vessels don’t produce ship tracks like they do off western North America, mainly because the air is already polluted—that is, there are already lots of particulates getting into clouds, so the extra sulfur from ships doesn’t do much.
Notice the white blob down between 60 and 0 degrees W on the map, halfway between the tips of South America and Africa. That’s near Antarctica, where hardly any ships venture. “It turns out it's a volcano,” says Yuan. “That provided us kind of an independent check, because there you don't expect any shipping activity at all, yet it's a hot spot.” That’s because volcanoes also spew sulfur aerosols, which seed clouds, brightening them in the same way ship tracks do.
Getting a better handle on the prevalence of ship tracks has a two-fold utility. For one, the clouds betray a ship’s emissions: A captain might lie to regulators about what kind of fuel they’re burning, but the sky above won’t. “If we can measure individual ship tracks, and we can attach that ship track to an individual ship, then we can know if a ship is emitting a lot of pollution,” says Yuan. “Then we know that probably it’s not burning clean fuel.” And two, pollution plays a large—and largely understudied—role in climate change. Ship emissions are terrible for the environment because they destroy air quality, but ricocheting some of the sun’s energy back into space is actually a benefit. Interestingly, this is also the idea behind stratospheric aerosol injection, a proposed form of geoengineering in which planes would spray sulfur to deflect sunlight. Researchers are also playing with cloud brightening techniques, in which they’d spray sea salt to brighten low-lying clouds, just like ship pollution does.
But not all kinds of pollution deflect solar energy, as sulfur does; some trap it. Other forms, like microplastics, have loaded the atmosphere with particulates that may have both cooling and heating effects on the planet. Plane contrails seem to largely play a warming role (although one that can be ameliorated by flying at certain altitudes). And both carbon dioxide and methane basically serve as insulating blankets, warming the planet.
These pollutants are often intermingled, so cutting one can have a complex effect. This is a paradox of climate action: By reducing air pollution, including the aerosols that deflect solar energy, one recent study estimates that humanity may be boosting warming from carbon dioxide by 15 to 50 percent.
In fact, the influence of aerosols remains one of the most uncertain areas in climate science, says Hailong Wang, who studies these dynamics at the Pacific Northwest National Laboratory. “Many, many models are still struggling to get the accurate representation of those effects in order to predict future climate change,” says Wang, who wasn’t involved in the new ship-tracks paper. “At some point, if we significantly reduce those aerosol emissions, we do expect some side effects of additional warming.” Modeling how that’ll play out, though, is difficult, in part because air pollution isn’t homogeneous around the world—it varies significantly by region, and it can change rapidly due to weather patterns, and on longer timescales due to air-quality regulations. But even though this study looked just at ship tracks, researchers can use the new data to validate climate models, Wang says—for instance, to see if they can accurately represent what happens when local aerosol pollution suddenly plummets.
Ships switching to low-sulfur fuel isn’t going to create a huge, planet-wide drop in emissions, because it's still a fossil fuel that burns carbon. (And don’t get it wrong—the bottom line is we absolutely have to stop burning fossil fuels for the good of the climate at large.) But it offers a little preview of what a reduction in one specific type of pollution might do for warming—and how complicated solving this puzzle will be.
In the meantime, as the 2020 regulation works its magic, ship tracks will continue to fade around the world. If you get lucky and spot one in new satellite imagery, you may have found yourself an outlaw.
" |
216 | 2,019 | "As the World Warms, Clouds Could Disappear—Catastrophically | WIRED" | "https://www.wired.com/story/as-the-world-warms-clouds-could-disappear-catastrophically" | "Natalie Wolchover, Science
As the World Warms, Clouds Could Disappear—Catastrophically
A state-of-the-art supercomputer simulation indicates that a feedback loop between global warming and cloud loss can push Earth’s climate past a disastrous tipping point in as little as a century.
Michelle Yun/Quanta Magazine; iStock
In a 1987 voyage to the Antarctic, the paleoceanographer James Kennett and his crew dropped anchor in the Weddell Sea, drilled into the seabed, and extracted a vertical cylinder of sediment. In an inch-thick layer of plankton fossils and other detritus buried more than 500 feet deep, they found a disturbing clue about the planet’s past that could spell disaster for the future.
Lower in the sediment core, fossils abounded from 60 plankton species. But in that thin cross-section from about 56 million years ago, the number of species dropped to 17. And the planktons’ oxygen and carbon isotope compositions had dramatically changed. Kennett and his student Lowell Stott deduced from the anomalous isotopes that carbon dioxide had flooded the air, causing the ocean to rapidly acidify and heat up, in a process similar to what we are seeing today.
While those 17 kinds of plankton were sinking through the warming waters and settling on the Antarctic seabed, a tapir-like creature died in what is now Wyoming, depositing a tooth in a bright-red layer of sedimentary rock coursing through the badlands of the Bighorn Basin. In 1992, the finder of the tooth fossil, Phil Gingerich, and collaborators Jim Zachos and Paul Koch reported the same isotope anomalies in its enamel that Kennett and Stott had presented in their ocean findings a year earlier. The prehistoric mammal had also been breathing CO₂-flooded air.
More data points surfaced in China, then Europe, then all over. A picture emerged of a brief, cataclysmic hot spell 56 million years ago, now known as the Paleocene-Eocene Thermal Maximum (PETM). After heat-trapping carbon leaked into the sky from an unknown source, the planet, which was already several degrees Celsius hotter than it is today, gained an additional 6 degrees. The ocean turned jacuzzi-hot near the equator and experienced mass extinctions worldwide. On land, primitive monkeys, horses and other early mammals marched northward, following vegetation to higher latitudes. The mammals also miniaturized over generations, as leaves became less nutritious in the carbonaceous air. Violent storms ravaged the planet; the geologic record indicates flash floods and protracted droughts. As Kennett put it, “Earth was triggered, and all hell broke loose.”
A bright-red stratum of sedimentary rock coursing through the badlands in Wyoming’s Bighorn Basin yielded some of the first fossil evidence of an extreme global warming event 56 million years ago.
Alamy
The PETM doesn’t only provide a past example of CO₂-driven climate change; scientists say it also points to an unknown factor that has an outsize influence on Earth’s climate. When the planet got hot, it got really hot. Ancient warming episodes like the PETM were always far more extreme than theoretical models of the climate suggest they should have been. Even after accounting for differences in geography, ocean currents and vegetation during these past episodes, paleoclimatologists find that something big appears to be missing from their models—an X-factor whose wild swings leave no trace in the fossil record.
Evidence is mounting in favor of the answer that experts have long suspected but have only recently been capable of exploring in detail. “It’s quite clear at this point that the answer is clouds,” said Matt Huber, a paleoclimate modeler at Purdue University.
Clouds currently cover about two-thirds of the planet at any moment. But computer simulations of clouds have begun to suggest that as the Earth warms, clouds become scarcer. With fewer white surfaces reflecting sunlight back to space, the Earth gets even warmer, leading to more cloud loss. This feedback loop causes warming to spiral out of control.
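That spiral can be made concrete with a toy fixed-point iteration, sketched below in Python. This is a cartoon with invented coefficients, not the cloud-resolving simulations discussed next; it shows only how a strong enough cloud feedback amplifies a modest initial warming until the clouds are gone.

```python
# Toy model of the feedback loop described above: warming removes bright
# clouds, which lets in more sunlight, which warms further, until there
# are no clouds left to lose. All coefficients are invented; this is a
# cartoon, not a cloud-resolving climate simulation.

def equilibrium_warming(forcing, feedback, max_cloud_loss=10.0, steps=1000):
    """Iterate T = forcing + feedback * cloud_loss(T) to a steady state."""
    temp = 0.0
    for _ in range(steps):
        cloud_loss = min(temp, max_cloud_loss)  # only so many clouds to lose
        temp = forcing + feedback * cloud_loss
    return temp

# Same 2 degrees of direct forcing, with increasingly strong cloud feedback:
for feedback in (0.2, 0.5, 1.1):
    print(feedback, round(equilibrium_warming(forcing=2.0, feedback=feedback), 2))
# 0.2 -> 2.5 and 0.5 -> 4.0 settle gently; at 1.1 the loop amplifies
# itself until the clouds are gone, landing at a much hotter 13.0.
```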
For decades, rough calculations have suggested that cloud loss could significantly impact climate, but this concern remained speculative until the last few years, when observations and simulations of clouds improved to the point where researchers could amass convincing evidence.
Now, new findings reported this week in the journal Nature Geoscience make the case that the effects of cloud loss are dramatic enough to explain ancient warming episodes like the PETM—and to precipitate future disaster. Climate physicists at the California Institute of Technology performed a state-of-the-art simulation of stratocumulus clouds, the low-lying, blankety kind that have by far the largest cooling effect on the planet. The simulation revealed a tipping point: a level of warming at which stratocumulus clouds break up altogether. The disappearance occurs when the concentration of CO2 in the simulated atmosphere reaches 1,200 parts per million—a level that fossil fuel burning could push us past in about a century, under "business-as-usual" emissions scenarios. In the simulation, when the tipping point is breached, Earth's temperature soars 8 degrees Celsius, in addition to the 4 degrees of warming or more caused by the CO2 directly.
Once clouds go away, the simulated climate "goes over a cliff," said Kerry Emanuel, a climate scientist at the Massachusetts Institute of Technology. A leading authority on atmospheric physics, Emanuel called the new findings "very plausible," though, as he noted, scientists must now make an effort to independently replicate the work.
To imagine 12 degrees of warming, think of crocodiles swimming in the Arctic and of the scorched, mostly lifeless equatorial regions during the PETM. If carbon emissions aren't curbed quickly enough and the tipping point is breached, "that would be truly devastating climate change," said Caltech's Tapio Schneider, who performed the new simulation with Colleen Kaul and Kyle Pressel.
Huber said the stratocumulus tipping point helps explain the volatility that's evident in the paleoclimate record. He thinks it might be one of many unknown instabilities in Earth's climate. "Schneider and co-authors have cracked open Pandora's box of potential climate surprises," he said, adding that, as the mechanisms behind vanishing clouds become clear, "all of a sudden this enormous sensitivity that is apparent from past climates isn't something that's just in the past. It becomes a vision of the future."

Clouds come in diverse shapes—sky-filling stratus, popcorn-puff cumulus, wispy cirrus, anvil-shaped nimbus and hybrids thereof—and span many physical scales. Made of microscopic droplets, they measure miles across and, collectively, cover most of the Earth's surface. By blocking sunlight from reaching the surface, clouds cool the planet by several crucial degrees. And yet, they are insubstantial, woven into greatness by complicated physics. If the planet's patchy white veil of clouds descended to the ground, it would make a watery sheen no thicker than a hair.
Clouds seem simple at first: They form when warm, humid air rises and cools. The water vapor in the air condenses around dust grains, sea salt or other particles, forming droplets of liquid water or ice—“cloud droplets.” But the picture grows increasingly complicated as heat, evaporation, turbulence, radiation, wind, geography and myriad other factors come into play.
[Image: These irregularly shaped cloud droplets made of ice (left) and spherical cloud droplets made of supercooled liquid water were laser-imaged during a 2018 research flight through stratocumulus clouds over the Southern Ocean. Credit: Dr. Emma Järvinen and Dr. Martin Schnaiter (KIT and schnaiTEC, Germany)]

Physicists have struggled since the 1960s to understand how global warming will affect the many different kinds of clouds, and how that will influence global warming in turn. For decades, clouds have been seen as by far the biggest source of uncertainty over how severe global warming will be—other than what society will do to reduce carbon emissions.
Kate Marvel contemplates the cloud question at the NASA Goddard Institute for Space Studies in New York City. Last spring, in her office several floors above Tom's Restaurant on the Upper West Side, Marvel, wearing a cloud-patterned scarf, pointed to a plot showing the range of predictions made by different global climate models. The 30 or so models, run by climate research centers around the world, program in all the known factors to predict how much Earth's temperature will increase as the CO2 level ticks up.
Each climate model solves a set of equations on a spherical grid representing Earth’s atmosphere. A supercomputer is used to evolve the grid of solutions forward in time, indicating how air and heat flow through each of the grid cells and circulate around the planet. By adding carbon dioxide and other heat-trapping greenhouse gases to the simulated atmosphere and seeing what happens, scientists can predict Earth’s climate response. All the climate models include Earth’s ocean and wind currents and incorporate most of the important climate feedback loops, like the melting of the polar ice caps and the rise in humidity, which both exacerbate global warming. The models agree about most factors but differ greatly in how they try to represent clouds.
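As a sketch of that architecture (a toy under stated assumptions, not any real model's code: actual models integrate fluid-dynamics equations on three-dimensional grids with far more physics), the basic structure is a grid of cells stepped forward in time while exchanging heat with neighboring cells:

import numpy as np

# Toy skeleton of a gridded climate-model loop; every constant here is
# illustrative, chosen only to show the step-and-mix structure.
n_lat, n_lon = 45, 90                    # coarse grid, roughly 4-degree cells
temp = np.full((n_lat, n_lon), 14.0)     # surface temperature, degrees C

def step(temp, co2_ppm, mixing=0.2, relax=0.05):
    # Toy forcing: equilibrium temperature rises with the log of CO2.
    equilibrium = 14.0 + 3.0 * np.log2(co2_ppm / 280.0)
    # Crude "circulation": each cell averages with its four neighbors.
    neighbors = (np.roll(temp, 1, 0) + np.roll(temp, -1, 0)
                 + np.roll(temp, 1, 1) + np.roll(temp, -1, 1)) / 4.0
    mixed = (1.0 - mixing) * temp + mixing * neighbors
    return mixed + relax * (equilibrium - mixed)  # slow pull toward equilibrium

for year in range(100):
    temp = step(temp, co2_ppm=280.0 * 1.01 ** year)  # CO2 rising 1% per year
print(round(float(temp.mean()), 2))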
The least sensitive climate models, which predict the mildest reaction to increasing CO2, find that Earth will warm 2 degrees Celsius if the atmospheric CO2 concentration doubles relative to preindustrial times, which is currently on track to happen by about 2050. (The CO2 concentration was 280 parts per million before fossil fuel burning began, and it's above 410 ppm now. So far, the average global temperature has risen 1 degree Celsius.) But the 2-degree prediction is the best-case scenario. "The thing that really freaks people out is this upper end here," Marvel said, indicating projections of 4 or 5 degrees of warming in response to the doubling of CO2.
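For orientation, those sensitivity figures follow the standard logarithmic rule of thumb: equilibrium warming scales with the number of CO2 doublings, not with the raw ppm increase. With S the warming per doubling and the article's concentrations,

\[
\Delta T \approx S \,\log_2\!\frac{C}{C_0}, \qquad
\log_2\!\frac{410}{280} \approx 0.55 ,
\]

so a low-end model with S = 2 degrees Celsius implies roughly 1.1 degrees of eventual warming from the concentration rise to date, close to the 1 degree already observed, while S = 4 or 5 degrees implies about 2.2 to 2.8 degrees once the climate catches up.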
“To put that in context, the difference between now and the last ice age was 4.5 degrees.” The huge range in the models’ predictions chiefly comes down to whether they see clouds blocking more or less sunlight in the future. As Marvel put it, “You can fairly confidently say that the model spread in climate sensitivity is basically just a model spread in what clouds are going to do.” Lucy Reading-Ikkanda/Quanta Magazine Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg The problem is that, in computer simulations of the global climate, today’s supercomputers cannot resolve grid cells that are smaller than about 100 kilometers by 100 kilometers in area. But clouds are often no more than a few kilometers across. Physicists therefore have to simplify or “parameterize” clouds in their global models, assigning an overall level of cloudiness to each grid cell based on other properties, like temperature and humidity.
But clouds involve the interplay of so many mechanisms that it's not obvious how best to parameterize them. The warming of the Earth and sky strengthens some mechanisms involved in cloud formation, while also fueling other forces that break clouds up. Global climate models that predict 2 degrees of warming in response to doubling CO2 generally also see little or no change in cloudiness. Models that project a rise of 4 or more degrees forecast fewer clouds in the coming decades.
The climatologist Michael Mann, director of the Earth System Science Center at Pennsylvania State University, said that even 2 degrees of warming will cause "considerable loss of life and suffering." He said it will kill coral reefs whose fish feed millions, while also elevating the risk of damaging floods, wildfires, droughts, heat waves, and hurricanes and causing "several feet of sea-level rise and threats to the world's low-lying island nations and coastal cities." At the 4-degree end of the range, we would see not only "the destruction of the world's coral reefs, massive loss of animal species, and catastrophic extreme weather events," Mann said, but also "meters of sea-level rise that would challenge our capacity for adaptation. It would mean the end of human civilization in its current form."

It is difficult to imagine what might happen if, a century or more from now, stratocumulus clouds were to suddenly disappear altogether, initiating something like an 8-degree jump on top of the warming that will already have occurred. "I hope we'll never get there," Tapio Schneider said in his Pasadena office last year.
In the last decade, advances in supercomputing power and new observations of actual clouds have attracted dozens of researchers like Schneider to the problem of global warming's X-factor. Researchers are now able to model cloud dynamics at high resolution, generating patches of simulated clouds that closely match real ones. This has allowed them to see what happens when they crank up the CO2.
First, physicists came to grips with high clouds—the icy, wispy ones like cirrus clouds that are miles high. By 2010, work by Mark Zelinka of Lawrence Livermore National Laboratory and others convincingly showed that as Earth warms, high clouds will move higher in the sky and also shift toward higher latitudes, where they won’t block as much direct sunlight as they do nearer the equator. This is expected to slightly exacerbate warming, and all global climate models have integrated this effect.
But vastly more important and more challenging than high clouds are the low, thick, turbulent ones—especially the stratocumulus variety. Bright-white sheets of stratocumulus cover a quarter of the ocean, reflecting 30 to 70 percent of the sunlight that would otherwise be absorbed by the dark waves below. Simulating stratocumulus clouds requires immense computing power because they contain turbulent eddies of all sizes.
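A back-of-envelope estimate shows why these sheets matter so much (every number below is a round assumption of mine, not a figure from the study): take stratocumulus covering a quarter of the ocean, the ocean covering about 70 percent of the globe, a global-mean insolation near 340 W/m², and swap a cloud albedo of about 0.4 for the dark sea's roughly 0.06:

\[
\Delta F \approx 0.25 \times 0.7 \times 340 \times (0.40 - 0.06)
\approx 20\ \mathrm{W\,m^{-2}} ,
\]

several times the roughly 4 W/m² of forcing from a doubling of CO2, which is why losing these decks would be such a violent shove to the climate.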
[Image: A research aircraft flying through stratocumulus clouds off the coast of Chile during a 2008 mission to gather data about the interactions between clouds, aerosols, atmospheric boundary layers, wind currents and other aspects of the Southeast Pacific climate. Credit: Robert Wood]

Chris Bretherton, an atmospheric scientist and mathematician at the University of Washington, performed some of the first simulations of these clouds combined with idealized climate models in 2013 and 2014. He and his collaborators modeled a small patch of stratocumulus and found that as the sea surface below it warmed under the influence of CO2, the cloud became thinner. That work and other findings—such as NASA satellite data indicating that warmer years are less cloudy than colder years—began to suggest that the least sensitive global climate models, the ones predicting little change in cloud cover and only 2 degrees of warming, probably aren't right.
Bretherton, whom Schneider calls “the smartest person we have in this area,” doesn’t only develop some of the best simulations of stratocumulus clouds; he and his team also fly through the actual clouds, dangling instruments from airplane wings to measure atmospheric conditions and bounce lasers off of cloud droplets.
In the Socrates mission last winter, Bretherton hopped on a government research plane and flew through stratocumulus clouds over the Southern Ocean between Tasmania and Antarctica. Global climate models tend to greatly underestimate the cloudiness of this region, and this makes the models relatively insensitive to possible changes in cloudiness.
Bretherton and his team set out to investigate why Southern Ocean clouds are so abundant. Their data indicate that the clouds consist primarily of supercooled water droplets rather than ice particles, as climate modelers had long assumed. Liquid-water droplets stick around longer than ice particles (which are bigger and more likely to fall as rain), and this seems to be why the region is cloudier than global climate models predict. Adjusting the models to reflect the findings will make them more sensitive to cloud loss in this region as the planet heats up. This is one of several lines of evidence, Bretherton said, "that would favor the range of predictions that's 3 to 5 degrees, not the 2- to 3-degree range."

Schneider's new simulation with Kaul and Pressel improved on Bretherton's earlier work primarily by connecting what happens in a small patch of stratocumulus cloud to a simple model of the rest of Earth's climate. This allowed them to investigate for the first time how these clouds not only respond to, but also affect, the global temperature, in a potential feedback loop.
[Image: Tapio Schneider, Colleen Kaul and Kyle Pressel, of the California Institute of Technology, identified a tipping point where stratocumulus clouds break up. Credits: California Institute of Technology (Schneider); Courtesy of Colleen Kaul; Courtesy of Kyle Pressel]

Their simulation, which ran for 2 million core-hours on supercomputers in Switzerland and California, modeled a roughly 5-by-5-kilometer patch of stratocumulus cloud much like the clouds off the California coast. As the CO2 level ratchets up in the simulated sky and the sea surface heats up, the dynamics of the cloud evolve. The researchers found that the tipping point occurs, and stratocumulus clouds suddenly disappear, because of two dominant factors that work against their formation. First, when higher CO2 levels make Earth's surface and sky hotter, the extra heat drives stronger turbulence inside the clouds. The turbulence mixes moist air near the top of the cloud, pushing it up and out through an important boundary layer that caps stratocumulus clouds, while drawing dry air in from above. Entrainment, as this is called, works to break up the cloud.
Second, as the greenhouse effect makes the upper atmosphere warmer and thus more humid, the cooling of the tops of stratocumulus clouds from above becomes less efficient. This cooling is essential, because it causes globs of cold, moist air at the top of the cloud to sink, making room for warm, moist air near Earth's surface to rise into the cloud and replenish it. When cooling gets less effective, stratocumulus clouds grow thin.
Countervailing forces and effects eventually get overpowered; when the CO2 level reaches about 1,200 parts per million in the simulation—which could happen in 100 to 150 years, if emissions aren't curbed—more entrainment and less cooling conspire to break up the stratocumulus cloud altogether.
To see how the loss of clouds would affect the global temperature, Schneider and colleagues inverted the approach of global climate models, simulating their cloud patch at high resolution and parameterizing the rest of the world outside that box. They found that, when the stratocumulus clouds disappeared in the simulation, the enormous amount of extra heat absorbed into the ocean increased its temperature and rate of evaporation. Water vapor has a greenhouse effect much like CO2, so more water vapor in the sky means that more heat will be trapped at the planet's surface. Extrapolated to the entire globe, the loss of low clouds and rise in water vapor leads to runaway warming—the dreaded 8-degree jump. After the climate has made this transition and water vapor saturates the air, ratcheting down the CO2 won't bring the clouds back. "There's hysteresis," Schneider said, where the state of the system depends on its history. "You need to reduce CO2 to concentrations around present day, even slightly below, before you form stratocumulus clouds again."

Paleoclimatologists said this hysteresis might explain other puzzles about the paleoclimate record. During the Pliocene, 3 million years ago, the atmospheric CO2 level was 400 ppm, similar to today, but Earth was 4 degrees hotter. This might be because we were cooling down from a much warmer, perhaps largely cloudless period, and stratocumulus clouds hadn't yet come back.
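That kind of hysteresis is easy to mimic in a toy two-state model (the thresholds below echo numbers in the article but are otherwise illustrative): ramp CO2 up and the deck collapses past the tipping point; ramp it back down and the deck only re-forms near present-day concentrations.

# Toy hysteresis loop for a stratocumulus deck (illustrative thresholds).
def update_deck(cloudy, co2_ppm, collapse_at=1200, recover_at=400):
    if cloudy and co2_ppm >= collapse_at:
        return False   # deck breaks up past the tipping point
    if not cloudy and co2_ppm <= recover_at:
        return True    # deck re-forms only near present-day CO2
    return cloudy

cloudy = True
ramp = list(range(400, 1300, 100)) + list(range(1300, 300, -100))
for co2 in ramp:
    cloudy = update_deck(cloudy, co2)
    print(co2, "cloudy" if cloudy else "cloud-free")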
Schneider emphasized an important caveat to the study, which will need to be addressed by future work: The simplified climate model he and his colleagues created assumed that global wind currents would stay as they are now. However, there is some evidence that these circulations might weaken in a way that would make stratocumulus clouds more robust, raising the threshold for their disappearance from 1,200 ppm to some higher level. Other changes could do the opposite, or the tipping point could vary by region.
To better “capture the heterogeneity” of the global system, Schneider said, researchers will need to use many simulations of cloud patches to calibrate a global climate model. “What I would love to do, and what I hope we’ll get a chance to do, is embed many, many of these [high-resolution] simulations in a global climate model, maybe tens of thousands, and then run a global climate simulation that interacts with” all of them, he said. Such a setup would enable a more precise prediction of the stratocumulus tipping point or points.
[Image: A simulation of stratocumulus clouds in a 3-by-3-kilometer patch of sky, as seen from below. Credit: Kyle Pressel]

There's a long way to go before we reach 1,200 parts per million, or thereabouts. Ultimate disaster can be averted if net carbon emissions can be reduced to zero—which doesn't mean humans can't release any carbon into the sky. We currently pump out 10 billion tons of it each year, and scientists estimate that Earth can absorb about 2 billion tons of it a year, in addition to what's naturally emitted and absorbed. If fossil fuel emissions can be reduced to 2 billion tons annually through the expansion of solar, wind, nuclear and geothermal energy, changes in the agricultural sector, and the use of carbon-capture technology, anthropogenic global warming will slow to a halt.
What does Schneider think the future will bring? Sitting in his office with his laptop screen open to a mesmerizing simulation of roiling clouds, he said, "I am pretty—fairly—optimistic, simply because I think solar power has gotten so much cheaper. It's not that far away from the cost curve for producing electricity from solar power crossing the fossil fuel cost curve. And once it crosses, there will be an exponential transformation of entire industries."

Kerry Emanuel, the MIT climate scientist, noted that possible economic collapse caused by nearer-term effects of climate change might also curtail carbon emissions before the stratocumulus tipping point is reached.
But other unforeseen changes and climate tipping points could accelerate us toward the cliff. "I'm worried," said Kennett, the pioneering paleoceanographer who discovered the PETM and unearthed evidence of many other tumultuous periods in Earth's history. "Are you kidding? As far as I'm concerned, global warming is the major issue of our time."

During the PETM, mammals, newly ascendant after the dinosaurs' downfall, actually thrived. Their northward march led them to land bridges that allowed them to fan out across the globe, filling ecological niches and spreading south again as the planet reabsorbed the excess CO2 in the sky and cooled over 200,000 years. However, their story is hardly one we can hope to emulate. One difference, scientists say, is that Earth was much warmer then to begin with, so there were no ice caps to melt and accelerate the warming and sea-level rise.
“The other big difference,” said the climatologist Gavin Schmidt , director of the Goddard Institute, “is, we’re here, and we’re adapted to the climate we have. We built our cities all the way around the coasts; we’ve built our agricultural systems expecting the rain to be where it is and the dry areas to be where they are.” And national borders are where they are. “We’re not prepared for those things to shift,” he said.
Original story reprinted with permission from Quanta Magazine , an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
" |
217 | 2,022 | "President Reif will step down" | "https://www.technologyreview.com/2022/04/27/1048461/president-reif-will-step-down" | "President Reif will step down
His decade-long tenure reinforced MIT's status as an engine of innovation.
By Steve Bradt

L. Rafael Reif announced in February that he plans to conclude his term as president of MIT at the end of 2022 and return to the faculty following a sabbatical.
Reif, a Stanford-educated electrical engineer whose parents fled Nazi Europe for Venezuela, joined MIT in 1980 and served as provost for seven years before becoming MIT’s 17th president in 2012.
During his tenure, he catalyzed pioneering efforts to prioritize student well-being, oversaw a revitalization of the campus and neighboring Kendall Square, steered MIT safely through both political turmoil and the covid-19 pandemic, and cemented its status as one of the world’s leading centers of innovation.
“Thanks to the exceptional efforts and impact of the people of MIT in research, education, and innovation, the Institute consistently ranks among the very top universities in the world,” Reif wrote in a letter to the MIT community. “We can all be proud of these collective achievements.” Among the ventures that took shape under Reif’s leadership was The Engine, a business incubator designed to support startups working on potentially transformative ideas that take time and “patient” capital to commercialize because they are based on new science. To date, it has $670 million in assets under management.
Reif also announced a $1 billion commitment to address global opportunities and challenges presented by the growing prevalence of computing and the rise of artificial intelligence. At the heart of this endeavor was the most significant restructuring of the Institute in 70 years: the creation of the MIT Stephen A. Schwarzman College of Computing, which promised to bring the power of computing and AI to all fields of study while educating students in every discipline to use those technologies to help make a better world.
EdX, an ambitious partnership between MIT and Harvard to deliver free online education to learners anywhere in the world, launched just before his presidency. It was reorganized as a public benefit company as covid-19 prompted a surge in remote learning.
Under Reif’s watch, MIT’s Campaign for a Better World raised $6.24 billion and the Institute’s endowment grew from $10.3 billion to $27.4 billion—reflecting both strong fundraising and the endowment’s spectacular investment performance.
Reif also led MIT to take decisive action to address sexual assault and mental health on campus through initiatives including the MindHandHeart coalition. Amid the domestic political turbulence that marked the second half of his presidency, he defended the Institute’s international students and researchers against new federal policies affecting immigrants and international travel.
He also grappled with difficult and painful events, including the killing of campus police officer Sean Collier in the aftermath of the 2013 Boston Marathon bombing and the revelations that MIT had accepted donations from Jeffrey Epstein, even after his conviction as a sex offender. When the campus was forced into an abrupt shutdown by the global spread of covid-19, Reif led the Institute community to understand the need for this unprecedented disruption; then he and his team guided it through a phased reopening that helped minimize the on-campus spread of the disease.
“Rafael Reif is a pioneer whose innovations at MIT have set new global standards in research and education,” says Shirley Ann Jackson ’68, PhD ’73, outgoing president of Rensselaer Polytechnic Institute and a life member of the MIT Corporation. “He has consistently stood up for the MIT community and articulated powerfully the value of attracting talent from every corner.” hide by Steve Bradt Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window This story was part of our May/June 2022 issue.
" |
218 | 2,022 | "How Bitcoin mining devastated this New York town" | "https://www.technologyreview.com/2022/04/18/1049331/bitcoin-cryptocurrency-cryptomining-new-york" | "How Bitcoin mining devastated this New York town
Between rising electricity rates and soaring climate costs, cryptomining is taking its toll on communities.
By Lois Parshley

If, in 2017, you had taken a gamble and purchased a comparatively new digital currency called Bitcoin, today you would be a millionaire many times over. But while the industry has provided windfalls for some, local communities have paid a price.
Cryptocurrency is created by computers racing to solve computationally intensive puzzles—a process that took off after a Chinese company called Bitmain started selling a machine in 2016 with application-specific integrated circuits that made it possible to do this specialized computing much more quickly.
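Those puzzles are a brute-force guessing game known as proof of work: mining hardware hashes slightly varied block data over and over until a hash happens to fall below a difficulty target, which is why purpose-built hashing chips, and so much electricity, pay off. A minimal sketch of the idea (toy difficulty; real Bitcoin mining double-SHA-256-hashes 80-byte block headers against a network-set target):

import hashlib

# Toy proof-of-work: find a nonce whose hash starts with N zero hex digits.
def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # each extra zero multiplies the expected work by 16

print(mine("toy block"))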
“Almost overnight,” says Colin Read, a professor of economics and finance at the State University of New York at Plattsburgh, “a crypto-mining arms race began.” Each Bitcoin transaction consumes 1,173 kilowatt-hours People began scouring the world for cheap sources of energy to run large Bitcoin-mining farms using these circuits. Cryptocurrency notoriously devours electricity; each Bitcoin transaction consumes 1,173 kilowatt-hours —more than the average American uses in a month. In 2020, the world’s crypto mining required more energy than the whole of Switzerland.
At the time, Plattsburgh had some of the least expensive power anywhere in the United States, thanks to cheap hydroelectricity from the New York Power Authority's Niagara project.
It didn’t take long for a subsidiary of the popular mining firm Coinmint to lease a Family Dollar store in Plattsburgh. The city’s building inspector, Joe McMahon, remembers that the man who signed the lease, Prieur Leary, wanted everything done quickly. “Overnight, he wanted power on,” McMahon says. “We were all uneasy about it but didn’t know the harm.” Coinmint filled the building with servers, running them 24 hours a day. When the miners wanted to expand into a nearby shopping center, Bill Treacy, the manager of the Plattsburgh municipal lighting department, told them that they would have to invest $140,000 in new infrastructure. He was surprised when they weren’t discouraged. Soon, the company was regularly drawing over 10 megawatts, enough power for about 4,000 homes.
Other miners were quick to follow. Treacy recalls one prospector calling to see if he could get five gigawatts—"I said, 'Excuse me. That's a quarter of what New York state uses on a given day!'" Plattsburgh was soon receiving a major mining application every week.
In January 2018, there was a cold snap. People turned up their heat and plugged in space heaters. The city quickly exceeded its quota of hydropower, forcing it to buy power elsewhere at much higher rates. McMahon says his Plattsburgh home’s energy bill jumped by $30 to $40 a month. “People felt there was a problem but didn’t know what to attribute it to,” he says.
As the long winter began to thaw, neighbors noticed a new disturbance: mining servers generate an extreme amount of heat, requiring extensive ventilation to avert shutoffs. Those fans generated a constant, high-frequency whine, McMahon says, “like a small-engine plane getting ready to take off.” It wasn’t just the decibels, but the pitch: “It registers at this weird level, like a toothache that won’t go away.” Carla Brancato lives across the river from Zafra, a crypto-mining and hosting company owned by Plattsburgh resident Ryan Brienza. She says that for several years her condo vibrated from its noise, as if someone were constantly running a vacuum upstairs.
Meanwhile, the automated nature of these servers meant that the new mines provided few local jobs. “I’m pro–economic development,” Read says, “but the biggest mine operation has fewer jobs than a new McDonald’s.” Plattsburgh doesn’t have a city income tax, and most miners lease their buildings, meaning they aren’t paying property taxes. Elizabeth Gibbs, a city councilor, was shocked when she went to tour one of the operations. “I was blown away by how hot it was—so hot and so loud,” she says. She describes a warehouse filled with hundreds of servers in stacks, connected by umbilical-like wires, with doors and windows left wide open to let cool air in.
Read, who became mayor in 2017, decided to impose a moratorium on new crypto mines until the city could figure out what to do. First, the New York Public Service Commission created a rider requiring high-density users to pay higher rates. It also required crypto companies to cover specialized infrastructure up front and put down a security deposit to ensure that their bills got paid. Based on two months of electricity use, Coinmint’s deposit was $1,019,503. The company spent two years pursuing appeals with the New York State Department of Public Service. “In the end, they lost,” Treacy says.
Next, Plattsburgh updated its building codes and noise ordinances. (As an established business, Coinmint voluntarily agreed to work with the city.) Brienza, for his part, doesn’t think the moratorium was necessary. “The city could have attracted a lot of business,” he says. Zafra’s new facility, he says, has made noise reduction a priority; Brancato says after the city worked with Zafra to turn down its fans last summer, her home is finally quiet.
Now Plattsburgh is again accepting new crypto-mine applications. Yet with the new regulations in place, they’ve seen little interest. Instead, mining has surged in the nearby town of Massena, where Coinmint signed a long-term lease for a former Alcoa aluminum plant. In 2021, Massena also halted new crypto-associated businesses. “Our goal is not to prevent business, but to make sure the character and safety of our town is protected,” wrote a town board member in an emailed statement.
From 2016 to 2018, crypto mining in upstate New York increased annual electric bills by about $165 million for small businesses and $79 million for individuals, a recent paper found. "Obviously if you're an investor, you see the value of crypto," McMahon says, "but me, living in this community? I don't."
Economist Matteo Benetton, a coauthor of the paper and a professor at the Haas School of Business at the University of California, Berkeley, says that crypto mining can depress local economies. In places with fixed electricity supplies, operations suck up grid capacity, potentially leading to supply shortages, rationing, and blackouts.
Even in places with ample access to power, like upstate New York, mining can crowd out other potential industries that might have employed more people. “While there are private benefits, through the electricity market, there are social costs,” Benetton says.
These impacts are now being felt across the country. Benetton says there are strong profit incentives to keep as many servers running as possible, and he is now calling for greater transparency in these companies' energy usage. That's not a popular opinion within the industry. But, says Benetton, "if you're really doing good, you shouldn't be afraid to disclose the data." The federal government does not currently monitor cryptocurrency's energy consumption, but Securities and Exchange Commission chair Gary Gensler recognizes that there are gaps in regulation. In a 2021 speech at the Aspen Security Forum, he referred to the industry as "the Wild West."

As long as mining is so profitable, Read warns, crypto bans just shift the harm to new locations. When China banned crypto mining in 2021 to achieve its carbon reduction goals, operations surged in places like Kazakhstan, where electricity comes primarily from coal. As a result, a recent study found, Bitcoin's use of renewable energy dropped by about half between 2020 and 2021, down to 25%.
Even when the industry invests in renewable energy, its sheer consumption makes it a significant source of carbon emissions.
Read dismisses the promises that green investments or greater efficiencies can solve this problem. In a recent working paper, he found that cryptocurrency's energy usage will rise another 30% by the end of the decade—producing an additional 32.5 million metric tons of carbon dioxide a year. As long as the price of Bitcoin goes up, the rewards of mining increase, which spurs energy use, he says. He refers to this situation as "the Bitcoin Dilemma." Those 32.5 million metric tons of carbon dioxide will make the climate crisis even worse, whether the emissions are coming from upstate New York or Kazakhstan. "We all suffer as a consequence," says Read.
Correction: An earlier version of this story said that each Bitcoin transaction consumes 1,173 kilowatts. The correct unit is kilowatt-hours.
Lois Parshley is an investigative science journalist.
This story was part of our May/June 2022 issue.
" |
219 | 2,017 | "Women in Machine Learning - Deep Learning Indaba 2018" | "https://deeplearningindaba.com/2018/indaba/women-in-ai" | "Indaba, Stellenbosch, SOUTH AFRICA, 09 Sep - 14 Sep 2018
Strengthening African Machine Learning

Women in Machine Learning
19:00-22:00, Monday 10 September 2018
Venue: STIAS (Stellenbosch Institute for Advanced Study, South Africa)

Join our fantastic speakers at an event to encourage, support and unite women in machine learning, while highlighting diverse career paths: from academia, to industrial research, to applied machine learning, and start-ups. Our panellists will each describe their personal career journey, and their experiences as a woman in machine learning, followed by a panel discussion, Q&A from the audience and a chance to network. Free and open to all conference attendees.
Sarah Brown, Brown University
Dr. Sarah Brown is a Postdoctoral Research Associate in the Data Science Initiative at Brown University (USA), affiliated with the Division of Applied Mathematics. Dr. Brown received her BS, MS, and PhD degrees in Electrical Engineering from Northeastern University. Dr. Brown builds machine learning tools that bridge from data-agnostic methods to systems that fuel data-driven discovery in historically qualitative domains. Her work approaches this from two fronts: building interfaces that enable her algorithms to leverage domain scientists' qualitative expertise, and developing model-based machine learning solutions through close collaboration with domain scientists. Sarah has been an instructor with The Carpentries since November 2017 and serves as a member of the Lesson Infrastructure Committee. Currently she serves as treasurer, and previously as a workshop organizer, for Women In Machine Learning, Inc. Previously, she has served as general co-chair of the Broadening Participation in Data Mining program, a founding member of the Black in AI organizing committee, and in various leadership roles in the National Society of Black Engineers.
Kathleen Siminyu, Data Scientist, Africa's Talking; Co-organiser, Nairobi Machine Learning and Data Science
Kathleen is a data scientist and machine learning engineer who is Regional Coordinator for the Artificial Intelligence for Development – Africa Network. She is Co-Founder and Co-Organiser of the Nairobi Women in Machine Learning and Data Science community, as well as part of the Deep Learning Indaba leadership.
Kathleen is also currently a Masters student at the Georgia Institute of Technology undertaking the Online Masters in Computer Science with a specialization in Computational Perception and Robotics. She is keen on investing time and effort in ventures that involve natural language processing for African languages as well as low-cost hardware robotics.
She can be reached on twitter @siminyu_kat and on LinkedIn.
Konstantina Palla, Researcher in the Healthcare ML Division at Microsoft Research Cambridge
Konstantina is a Machine Learning Researcher in the Healthcare ML Division at Microsoft Research Cambridge (UK). Her research focuses on the construction and application of Bayesian probabilistic models for discovering latent structure in data. Recently, she has been particularly interested in the application of probabilistic modelling in the healthcare domain as a means to understand disease subtypes and patient subgroups. In her PhD, she developed nonparametric models for relational data with a focus on time-evolving settings.
Muthoni Wanyoike, Code for Africa; and Co-organiser, Nairobi Machine Learning and Data Science
Muthoni Wanyoike is the team lead at InstaDeep in Kenya. She is passionate about bridging the skills gap in AI in Africa and does this by co-organising the Nairobi Women in Machine Learning community. Through the community, she helps provide learning, mentorship, networking and job opportunities for people interested in and working in AI. She is experienced in research, data analytics, community and project management, and community growth hacking.
Tempest van Schaik, Microsoft Commercial Software Engineering
Tempest van Schaik is a multi-disciplinary engineer with experience in the end-to-end development of health technology, from wet-lab research, to medical devices, to clinical UX, and medical data science. She currently works as a machine learning engineer at Microsoft, London, focusing on healthcare & biosciences. She puts machine learning into practice in close collaboration with clinical & pharmaceutical researchers to solve diverse, real-world problems. Recently she has used ML to better understand the physiotherapy of cystic fibrosis.
She has degrees in Biomedical Engineering and Electrical (Information) Engineering from the University of the Witwatersrand (South Africa), and a PhD in Bioengineering from Imperial College London. She is an Ambassador for Diversity & Inclusion for her team at Microsoft (Commercial Software Engineering).
You can find her on Twitter: @Dr_Tempest
" |
220 | 2,017 | "Speakers - Deep Learning Indaba 2018" | "https://deeplearningindaba.com/2018/indaba/speakers" | "Speakers
The 2018 edition of the Deep Learning Indaba welcomed 33 brilliant speakers who took time from their busy lives to attend the conference.
Plenary Speakers
Asja Fischer, Assistant Professor, Ruhr-University Bochum
Katja Hofmann, Researcher, Microsoft Research Cambridge
Kyunghyun Cho, Assistant Professor, New York University, and Research Scientist at Facebook AI Research
Jeff Dean, Senior Fellow, Google AI; Google Brain lead and co-founder; co-designer and implementor of TensorFlow, MapReduce, BigTable, and Spanner
Moustapha Cisse, Team Lead, Google AI Ghana; Lead, African Masters in Machine Intelligence, AIMS
Naila Murray, Senior Scientist, Naver Labs Europe
Nando de Freitas, Principal Scientist and Team Lead, DeepMind; Deep Learning Indaba advisory board member
Yabebal Fantaye, Junior Research Chair, African Institute for Mathematical Sciences, South Africa

Speakers for Mathematics for Machine Learning
Learn the key concepts of probability and differential calculus necessary to gain the most from the rest of the week's programme. This session will be split into 5 smaller classes run in parallel (each covering the same material).
Cynthia Mulenga, Product Manager, Mwabu Zambia; Trainer, Asikana Network; Co-lead, Facebook Developer Circle Lusaka
Daniela Massiceti, PhD Candidate, University of Oxford
Kathleen Siminyu, Data Scientist, Africa's Talking; Co-organiser, Nairobi Machine Learning and Data Science
Kendi Muchungi, Programme Leader, Africa Nazarene University
Avishkar Bhoopchand, Research Engineer, DeepMind

AI and Africa
In this session we wish to highlight the ongoing as well as new directions of AI/ML work happening in Africa. The session will have two sections: first, short presentations (5-10 minutes) from senior representatives of AI/ML/policy companies and organisations (both NGO and governmental). These presentations are meant to provide an overview of what each organisation offers, highlight their successes, learn from challenges encountered and understand opportunities discovered. Following the presentations, we will have a Q&A session to open up the conversation to the audience.
Sumir Panji, Network Manager, H3Africa
David Sengeh, Chief Innovation Officer, Government of Sierra Leone
Jon Lenchner, Chief Scientist, IBM Research Africa

Generative Models and Healthcare
This session builds on the understanding of generative models, with a focus on applications in healthcare. For the first half, Konstantina will discuss the role of probabilistic thinking, uncertainty and causality, and then look at how these tools can be used to build personalised healthcare tools. In the second part, Shakir will recap the area of generative models, specifically the algorithms for LDA, VAEs and GANs, and then look at how these can be applied in healthcare settings ranging from the analysis of electronic health records and medical notes to drug discovery and medical imaging.
Konstantina Palla, Researcher in the Healthcare ML Division at Microsoft Research Cambridge
Shakir Mohamed, Research Scientist, DeepMind, London

Special Session on Reinforcement Learning
David Silver, Principal Scientist, DeepMind

Natural Language Processing
Learn about the recent history of NLP and discuss the biggest open problems in NLP with a panel of experts.
Herman Kamper, Lecturer, Stellenbosch University
Sebastian Ruder, PhD Candidate, Insight Research Centre for Data Analytics; Research Scientist, AYLIEN, Dublin

AI Ethics and Policy
This session will tackle the intersections of AI/ML, ethics and policy on the continent. The session will be a blend of a practical interactive ethics session, a talk on fairness, and robust discussion via an expert panel made up of researchers, practitioners and policy makers. At the end of the day, we would like to answer: how do we work to inject our own values into AI/ML development in Africa, allow a progressive environment for development, and protect our communities?
Timnit Gebru, Research Scientist, Google AI; Black in AI
Osonde Osoba, Engineer, RAND Corporation; Professor, Pardee RAND Graduate School
Mmaki Jantjies, Senior Lecturer, University of the Western Cape
Linet Kwamboka, Founder and CEO, DataScience Ltd
Vukosi Marivate, Chair of Data Science, University of Pretoria; CSIR; and Deep Learning Indaba

Machine Learning in Production
Learn the tricks of the trade for deploying and scaling ML models in a production environment from experienced practitioners.
Omoju Miller, Senior Data Scientist, GitHub
Stuart Reid, Chief Scientist and Partner, NMRQL Research
Amine Kerkerni, AI Product Development Lead, InstaDeep

Frontiers of Computer Vision
Learned the basics of convolutional neural networks? Want to go beyond? Join to extend your understanding of CNNs and how they extract image features for higher-level computer vision tasks like object detection, localisation and semantic segmentation. Following this, we invite a panel of computer vision experts to give their personal insights, advice and expert views on the frontiers of the field: what are the biggest unsolved problems in computer vision, how are they relevant to Africa, and where should African researchers be directing their energy to solve these problems? Also join to hear short spotlight talks given by fellow Indaba attendees – a great opportunity to learn more about current state-of-the-art methods being used in computer vision!
Sara Hooker, Google AI Ghana

Life of a Machine Learning Startup
Andrea Böhmert, Co-Managing Partner, Knife Capital
Karim Beguir, Co-founder and CEO of InstaDeep; Google Developer Machine Learning Expert

Reinforcement Learning II
Benjamin Rosman, Researcher, Council for Scientific and Industrial Research
221 | 2,017 | "Programme - Deep Learning Indaba 2018" | "https://deeplearningindaba.com/2018/indaba/programme" | "Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Indaba, Stellenbosch, SOUTH AFRICA 09 Sep - 14 Sep 2018 Strengthening African Machine Learning Programme This is the official programme of the 2018 Deep Learning Indaba.
You can find videos and slides in our resources about these sessions.
Sunday, 09 September: Welcome Day and Background
09:00 - 18:00 Arrivals in Stellenbosch and check-in
11:00 - 14:00 Registration
14:00 - 16:00 Practical: Machine Learning Basics (Stephan Gouws; Avishkar Bhoopchand; Ulrich Paquet, DeepMind)
16:30 - 18:30 Mathematics for Machine Learning
18:30 - 19:00 Transfer to Evening Welcome Event
19:00 - Evening Welcome Event
Monday, 10 September: Fundamentals and Convolutional Models
07:30 - 08:30 Registration and Morning Coffee
08:30 - 09:30 Indaba Opening and Welcome Keynote (Nando de Freitas, DeepMind)
09:30 - 10:30 Deep Learning Fundamentals 1 (Moustapha Cisse, Google)
10:30 - 11:00 Break
11:00 - 12:00 Deep Learning Fundamentals 2 (Moustapha Cisse, Google)
12:00 - 14:00 Lunch and Poster Session
14:00 - 16:00 Practical 1: Deep Feedforward Models (Stephan Gouws; Avishkar Bhoopchand; Ulrich Paquet, DeepMind)
16:00 - 16:30 Break
16:30 - 18:30 Convolutional Models (Naila Murray, Naver Labs)
18:30 - 19:30 Free Time
19:30 - 21:00 AlphaGo Movie Screening
19:00 - 22:00 Women in Machine Learning
Tuesday, 11 September: Recurrent Models
07:30 - 08:30 Morning Coffee
08:30 - 10:30 Practical 2: Convolutional Networks (Stephan Gouws; Avishkar Bhoopchand; Ulrich Paquet, DeepMind)
10:30 - 11:00 Break
11:00 - 12:00 Probabilistic Thinking (Yabebal Fantaye, AIMS South Africa)
12:00 - 14:00 Lunch and Poster Session
14:00 - 16:00 Recurrent Neural Networks (Kyunghyun Cho, New York University)
16:00 - 16:30 Break
16:30 - 18:30 Practical 3: Recurrent Models (Stephan Gouws; Avishkar Bhoopchand; Ulrich Paquet, DeepMind)
18:30 - 19:00 Free Time
19:30 - 21:00 AlphaGo Movie Screening
19:30 - 21:30 Topical session: "How to write a great research paper"
Wednesday, 12 September: Generative Models and Reinforcement Learning
07:30 - 08:30 Morning Coffee
08:30 - 10:30 Generative Models (Asja Fischer, University of Bochum)
10:30 - 11:00 Break
11:00 - 12:00 Kambule and Maathai Awards session
12:00 - 14:00 Lunch and Poster Session
14:00 - 16:00 Reinforcement Learning (Katja Hofmann, Microsoft Research Cambridge)
16:00 - 16:30 Break
16:30 - 18:30 Practical 4: Reinforcement Learning (Stephan Gouws; Avishkar Bhoopchand; Ulrich Paquet, DeepMind)
18:30 - 19:00 Free Time
19:30 - 21:00 AlphaGo Movie Screening
19:00 - Mentorship Evening Events (x2)
Thursday, 13 September: Parallel Tracks
07:30 - 08:30 Morning Coffee
08:30 - 10:30 Parallel: Non-Recurrent Sequence Models (Kyunghyun Cho and others)
08:30 - 10:30 Parallel: Frontiers of Computer Vision (Daniela Massiceti, University of Oxford; Sara Hooker, Google Brain; Saumya Jetley, University of Oxford)
08:30 - 10:30 Parallel: AI for Africa (Sumir Panji, Linet Kwamboka, David Sengeh, Jon Lenchner, Moustapha Cisse)
10:30 - 11:00 Break
11:00 - 12:00 Democratizing Machine Learning on the African Continent
12:00 - 14:00 Lunch and Poster Session
14:00 - 16:00 Success Stories of Reinforcement Learning (David Silver, DeepMind)
16:00 - 16:30 Break
16:30 - 18:30 Parallel: Generative Models and Healthcare (Konstantina Palla, Microsoft Research; Shakir Mohamed, DeepMind)
16:30 - 18:30 Parallel: Natural Language Processing (Herman Kamper, University of Stellenbosch; Sebastian Ruder, AYLIEN)
16:30 - 18:30 Parallel: Life of a Machine Learning Startup (Andrea Böhmert, Knife Capital; Karim Beguir, InstaDeep; Blaise Thomson, Apple)
18:30 - 19:00 Transfer to Evening Event
19:00 - Evening Farewell Event
Friday, 14 September: Closing
07:30 - 08:30 Morning Coffee
08:30 - 10:30 Parallel: Reinforcement Learning II (Talk: Benjamin Rosman, CSIR; Panel: David Silver, DeepMind; Katja Hofmann, Microsoft Research Cambridge)
08:30 - 10:30 Parallel: Machine Learning in Production (Stuart Reid, NMRQL Research; Amine Kerkerni, InstaDeep; Omoju Miller, GitHub)
08:30 - 10:30 Parallel: AI Ethics and Policy (Facilitators: Vukosi Marivate; Muthoni Wanyoike. Speakers: Timnit Gebru; Mmaki Jantjies; Osonde Osoba; Linet Kwamboka)
10:30 - 11:00 Break
11:00 - 13:00 TensorFlow and Real Life Machine Learning (Jeff Dean, Google)
13:00 - 13:30 Closing, Poster Prizes, Farewell
13:30 - 14:30 Lunch
" |
222 | 2,017 | "Maathai Impact Award - Deep Learning Indaba 2018" | "https://deeplearningindaba.com/2018/awards/mathai-impact-award" | "Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Awards Key Links Apply for Maathai Award Submit supporting letters The Maathai award encourages and recognises work by African innovators that shows impactful application of machine learning and artificial intelligence.
This award reinforces the legacy of Wangari Maathai in acknowledging the capacity of individuals to be a positive force for change: by recognising ideas and initiatives that demonstrate that each of us, no matter how small, can make a difference.
The award will be presented at the annual Deep Learning Indaba in September 2018. We welcome nominations from individuals, teams, and organisations. The winner will be awarded a trophy and a cash prize.
Eligibility The award is open to individuals, teams, or organisations.
The Awards Committee considers impactful work to be broadly defined as any work — technological, social, or economic — that has positively transformed, or has the potential to positively transform, our African societies. There are many ways to have impact, and we hope that those submitting nominations or self-nominations will be generous and creative in their judgement of the term ‘innovation’.
Examples could include: A research paper that shows important results using machine learning to solve problems affecting food security.
An African startup using machine learning whose work is set to have positive impact or demonstrates technical excellence in its focus area.
An individual who has shown a track record of empowering individuals and groups affected by or involved with machine learning.
Government agencies or individuals contributing positively to the policy and society conversations around machine learning and artificial intelligence.
A non-profit organisation that empowers innovators through skills development or mentoring.
An established business that has deployed machine learning in an innovative way to positively impact their business and customer experience.
Nominations can be sent by anyone, including individuals and organisations themselves.
Nominations are welcomed from any African country.
All supporting letters should be in English (or a translation supplied).
Next Deadline 30 April 2018, 11:59pm, Central African Time (CAT).
Selection Criteria Nominations will be reviewed to assess the breadth of potential impact they have had, their role in strengthening African machine learning and artificial intelligence, and the strength of the supporting letters.
Submissions Nomination is made by completing an online nomination form, which includes details of the impactful work and the necessary contact details. For any teams/organisations, a principal contact should be listed.
Two supporting letters that describe the nature of the impactful work, why it is considered to be impactful, in what way the nominated candidate(s) or organisation strengthens African machine learning, and any other relevant information. Letters can be written by anyone familiar with the impactful work. Letters should be 600 words at most, and be submitted using the online form.
Further Questions and Contact If you have any questions, or need clarification on any part of this call for nominations, don’t hesitate to contact us at [email protected]. In Honour of Prof. Wangari Muta Maathai: Professor Wangari Muta Maathai, Africa’s first female Nobel Laureate, is internationally recognised for her contributions to democracy, peace and sustainable development in Kenya and across the greater African collective. Born in rural Kenya, she became the first woman in East and Central Africa to earn a doctorate degree and, following that, to be appointed associate professor. Alongside her academic career, Professor Maathai was a vociferous environmental and political activist, and was central to Kenya's first multi-party elections in 1992, during which she strove for free and fair elections. She went on to serve in the Kenyan government as the Assistant Minister for the Department of Environment and Natural Resources from 2003 to 2007. Her deep connection with the environment led her to found a pan-African environmental organisation, the Green Belt Movement (GBM), whose goal was to reduce poverty and promote environmental conservation. She was a fierce advocate for women, serving on the National Council of Women of Kenya for 11 years and uplifting the lives of women through her work with the GBM. Her contributions were recognised in 2004 when she was awarded the Nobel Peace Prize. She continued her inspirational work, founding the Wangari Maathai Institute for Peace and Environmental Studies in partnership with the University of Nairobi, chairing global initiatives to protect African forest regions, and serving as a United Nations Messenger of Peace on environmental and climate change. Wangari Maathai left a lasting impact on our continent and the world, and through this award we form a continuity of her spirit, and recognise the next generation of impactful Africans.
" |
223 | 2,017 | "Kambule Doctoral Award - Deep Learning Indaba 2018" | "https://deeplearningindaba.com/2018/awards/kambule-doctoral-award" | "Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Awards Key Links Apply for Kambule Doctoral Award Submit supporting letters The Kambule dissertation award recognises and encourages excellence in research and writing by doctoral candidates at African universities, in any area of computational and statistical sciences.
The Kambule award celebrates African research excellence: its recipients are those who uphold Thamsanqa Kambule’s legacy as a defender of learning, a seeker of knowledge, and an activist for equality.
The award will be presented at the annual Deep Learning Indaba in September 2018. We welcome nominations from both students themselves, and their supervisors and mentors.
The winner will be awarded a trophy and a cash prize of at least ZAR 10,000, will travel to speak at the Deep Learning Indaba in South Africa, and will travel to Oxford to participate in the Oxford-Africa Initiative’s Insaka meeting.
Eligibility PhD theses in the broad area of computational and statistical sciences will be eligible. This includes, but is not restricted to: machine learning, deep learning, artificial intelligence, statistics, probability, data science, information theory, econometrics, optimisation, statistical physics, biostatistics and bioinformatics, natural language processing, computer vision, and computational neuroscience.
The Awards Committee interprets the phrase “PhD thesis” to mean a dissertation in final form, i.e. approved by the student’s examinations board, e.g., viva examinations completed, public defence completed, corrected version submitted, or degree awarded.
The nominee must have been registered as a student and received their degree from an African university.
A dissertation may be nominated by the author; an academic who is in a position to comment on the merits of the work and the candidate (e.g., PhD supervisor, thesis examiner, academic mentor, collaborators); or a department chair or head of department.
Theses completed during the period of 2015-2018 are eligible for nomination.
Nominations are welcomed from any African country.
The thesis can be in any language, although the Awards Committee may require English translations for full consideration of theses written in other languages.
All supporting letters and reports should be in English (or a translation supplied).
Next Deadline 30 April 2018, 11:59pm, Central African Time (CAT).
Selection Criteria Dissertations will be reviewed for technical depth and significance of the research contribution, potential impact on theory and practice, quality of presentation, and its role in strengthening African machine learning.
Submissions Nomination is made by completing an online nomination form, which also includes electronic submission of the dissertation. We recommend that dissertations be written in English (the Awards Committee may require an English translation for full consideration of theses written in other languages).
A supporting letter that describes the main theoretical, methodological, and/or applied contributions of the thesis.
This supporting letter should be written by an academic who is in a position to comment on the merits of the work and the candidate, e.g., PhD supervisor, thesis examiner, academic mentor, collaborators, etc.
The letter should be written by someone other than the person who is nominating the candidate.
This supporting letter should be written in English, and submitted electronically in PDF format using the ‘submit supporting letter’ form.
If the examiners’ reports are available, these should also be submitted.
Further Questions and Contact If you have any questions, or need clarification on any part of this call for nominations, don’t hesitate to contact us at [email protected]. In Honour of Dr Thamsanqa W. Kambule: Dr Thamsanqa “Wilkie” Kambule, one of South Africa's greatest mathematicians and teachers, is remembered for his life’s contribution to education, specifically black education under the Bantu Education Act, a segregated education system enforced by the apartheid regime. Through his teaching, which included 20 years as principal of Orlando High School in Soweto, he went on to shape and influence many great minds, including Nobel Peace Prize winner Desmond Tutu, former national police commissioner Jackie Selebi, and former chief executive officer of the Independent Electoral Commission Pansy Tlakula. Alongside his passion for teaching, he was also a gifted mathematician. He was awarded honorary doctorate degrees from the universities of the Witwatersrand, Pretoria and Fort Hare, and was the University of the Witwatersrand’s first black professor in mathematics. In 2002, former South African president Thabo Mbeki bestowed on him the Order of the Baobab in Gold for his exceptional contribution to mathematics, education, human development and community service, and in 2008 he was awarded an honorary membership of the Actuarial Society of South Africa, a membership he was denied during apartheid. The Thamsanqa Kambule Doctoral Dissertation Award is established in honour of his contributions to the field of mathematics and his dedication to furthering the minds of his students.
" |
224 | 2,017 | "Awards - Deep Learning Indaba 2018" | "https://deeplearningindaba.com/2018/awards" | "Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Indaba Posters Programme Speakers Sponsors Venue Women in Machine Learning IndabaX Awards Maathai Impact Award Kambule Doctoral Award Blog About Our Mission The Indaba Abantu Resources Reports Community Code of Conduct Videos and Slides Press and News Contact Us Other Editions 2017 2019 2020 2021 Awards Key Dates 30 April 2018 – Nominations close 30 April 2018 – Support letters due July 2018 – Awardees notified September 2018 – Awards given The Deep Learning Indaba is proud to establish two annual awards, the Kambule Award and the Maathai award, to recognise excellence in African research and applications. These awards celebrate those who show the strength of African machine learning, and we encourage as many applications as possible.
Thamsanqa Kambule Doctoral Dissertation Award The Kambule dissertation award recognises and encourages excellence in research and writing by doctoral candidates at African universities, in any area of computational and statistical sciences. The Kambule award celebrates African research excellence: its recipients are those who uphold Thamsanqa Kambule’s legacy as a defender of learning, a seeker of knowledge, and an activist for equality. The award will be presented at the annual Deep Learning Indaba in September 2018. We welcome nominations from both students themselves, and their supervisors and mentors. The winner will be awarded a trophy and a cash prize of at least ZAR 10,000.
See the Kambule Award page for more details of eligibility and submission.
Wangari Maathai Impact Award The Maathai award encourages and recognises work by African innovators that shows impactful application of machine learning and artificial intelligence. This award reinforces the legacy of Wangari Maathai in acknowledging the capacity of individuals to be a positive force for change: by recognising ideas and initiatives that demonstrate that each of us, no matter how small, can make a difference. The award will be presented at the annual Deep Learning Indaba in September 2018. We welcome nominations from individuals, teams, and organisations. The winner will be awarded a trophy and a cash prize.
See the Maathai Award page for more details of eligibility and submission.
" |
225 | 2,023 | "Three Senior OpenAI Researchers Resign as Crisis Deepens — The Information" | "https://www.theinformation.com/articles/three-senior-openai-researchers-resign-as-crisis-deepens" | "Exclusive: OpenAI Co-Founder Altman Plans New Venture Subscribe and Read now Three Senior OpenAI Researchers Resign as Crisis Deepens [email protected] om Profile and archive → Follow Jon on Twitter and [email protected] om Profile and archive → Follow Amir on Twitter Three senior researchers at OpenAI resigned Friday night as the artificial intelligence developer suffered fallout from the firing of CEO Sam Altman and sudden resignation of President Greg Brockman, according to several people with knowledge of the situation.
Jakub Pachocki, the company’s director of research; Aleksander Madry, head of a team evaluating potential risks from AI; and Szymon Sidor, a seven-year researcher at the startup, told associates they had resigned, these people said. The departures are a sign of immense disappointment among some employees after the Altman ouster and underscore long-simmering divisions at the ChatGPT creator about AI 'safety' practices.
" |
226 | 2,023 | "Watch Tech Effects: How Video Games Impact You | Tech Effects | WIRED" | "https://www.wired.com/video/watch/tech-effects-how-video-games-impact-you" | "Open Navigation Menu To revisit this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revisit this article, select My Account, then View saved stories Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Tech Effects: How Video Games Impact You About Released on 10/24/2018 [gunshots] If you grew up playing video games like I did, you've probably heard lots of conflicting information.
Some say too much gaming will ruin your vision and rot your brain, while others claim that it improves your hand-eye coordination and can even make you smarter.
So, what exactly does gaming do to our brain and our body? To find out, I visited doctors and researchers.
We're seeing brain activity in different frequencies.
Tested my hand-eye coordination against a pro gamer.
You can't catch up.
And somehow ended up in a sub-200-degree cryo chamber, all to answer the question: how do video games affect us? The stakes are higher than ever.
The industry is booming, Esports have gone mainstream, there are college leagues, parents are even getting video game tutors for their kids.
And thanks in part to smart phones, and free games like Fortnite, gamers are playing more than ever before.
So given that we can play virtually anywhere at any time, how is all this gaming changing us physically? Let's start with the excuse I used to give my mom when I was trying to get a little bit more time on the Atari.
It's making me a better athlete.
To find out if that's actually true, I headed to the sports academy in Thousand Oaks, California, where amateur gamers and Esports pros train under the same roof as traditional athletes.
This is pro gamer Matt Higgenbotham.
I'm known online as Acadian.
[Peter Rubin] Between training and casual gaming, Matt plays eight to ten hours a day.
People say, you know, it improves hand-eye coordination, it improves response time, what have you seen in your own life? If you only play League of Legends as like your only activity with no physical exercise, in my opinion, you're just going to get out of shape.
In terms of positives? Yeah, maybe cognitively it would increase the things you're going to use in the game: reacting to things quickly, making decisions quickly.
[Peter] So is he right? Let's find out if being an avid gamer actually makes you sharper.
We're gonna be taking a bunch of cognitive tests, one after the other. Now, Matt is a pro gamer, I am very much not, so we're gonna see exactly how our results break down.
The first test is my new arch nemesis, the Dynavision board, which tests pure reaction time.
Your job is to hit the button when it lights up red, okay? It's gonna move pretty quickly.
Okay.
So you're gonna wanna rely on your periphery.
Okay, I can use either hand, right? You can use either hand, that's right.
This is gonna be a mess, I can already tell.
Now, Matt's calm. He's making it look easy, but this is way, way harder than it looks.
Am I not seeing one? Yep, down below, yeah there you go.
I just threw the whole test.
It's pretty fun actually.
Damn it! I'm gonna walk you over to the next test.
Okay, yeah, so let's leave this far behind.
I'll see you in hell, Dynaboard.
The next one tests what's called Cognitive Processing.
It's also a reaction test, but, unlike the Dynavision Board, there's a voice telling you to do the opposite of what you're actually supposed to do.
Okay, there's going to be a voice in your head that says stop or go, don't listen to that voice. Keep hitting green.
My brain! Yeah, it's tiring, right? Your body gets fatigued, and so does the brain.
That's crazy! The last test measures your ability to track multiple objects at the same time.
We had to keep tabs on certain spheres as they floated around in a 3D space. Kinda like trying to win two games of three card monte at the same time.
Four seven.
Three and five. Six and eight.
I got better at it after like eight of them.
No, they bounced off each other, no! [laughs] I don't know who lost them.
My confidence is shaken at this point.
Moment of truth. [How are you?] Let's see how I did.
I hope you got some good news for me.
Of course, always. These tests are built to really push your cognitive processing but at the same time give you measurable results and immediate feedback. Matt out-performed you in the more complex tests, so as tests got more complicated, and had a significant amount of distractions and opportunity for the brain to start thinking about something that wasn't primary to the task, he out-performed you pretty significantly in those tests.
If we were to compare both your scores to a normal population, of which we have data, he's in the ninety-eighth percentile, and you're probably in the sixtieth or seventieth percentile.
So, are we talking about self-selection here? Is it that people who are good at this stuff are playing games, or is there proof that games can actually improve your cognition in that way? No, I think for sure games can help improve your cognition.
Playing video games can be very high speed, can create a lot of chaos, create a lot of multiple environments where you have to make decisions, and all of these are forming skills in the brain.
So no, I think in general, just like in every capacity of human performance, we all start with some baseline based on genetics, but the opportunity to train cognition, I think, is really powerful.
Okay, so a pro gamer who's twenty years younger than me beat me at a few cognitive tests.
I mean, of course he did. What does science have to say about all this? Video games are a hugely broad category, and we know for sure that the impact of a game has to do with what you're asked to do.
So because of that, different games will have different impacts on the brain.
You wouldn't ask, "What's the impact of food on your body?"
You'd wanna know the composition of the food, right? And so the same is true of video games, so depending on what we would call the mechanics, the dynamics, the content of individual games, that is what would predict how the games will affect your brain.
[Peter] Action games like Counterstrike, Overwatch, and Fortnite are some of the most popular with consumers these days.
And Green and his colleagues look to games like those to find out what their impact is.
[Shawn] There is a sub-type of games, action games, that has been linked with positive effects on perceptual and cognitive skills.
These are games that have lots of fast motion in them, lots of objects to track simultaneously, and an emphasis on peripheral processing, so items first appear at the edges of the screen.
The need to make quick and accurate decisions under time pressure.
[Peter] Based on fifteen years' worth of studies, researchers found that action games' biggest positive effects were on perception, how our senses interpret external stimuli like sights and sounds; spatial cognition, which helps you coordinate yourself in and navigate 3D environments; and top-down attention, the ability to focus on one object while ignoring distractions. How far that generalizes, I think, is a pretty open question, so my expectation is that there are plenty of people who show pretty exceptional hand-eye coordination with a joystick who might not be able to catch a baseball very well.
So it's certainly the case that perceptual motor skill development in one area won't necessarily generalize to all areas.
I'm curious about your thoughts on the thresholds between benefits gained from action games and where those diminishing returns might kick in.
You will get more learning gain from smaller sessions spread out over time than one big block.
With respect to perceptual and cognitive skills, we've either seen a positive impact or a null impact.
We haven't seen any area that has been damaged, where there is worse performance.
[Peter] So those are the positive effects of playing action games. But what if you develop games that specifically harness those cognitive effects? That's exactly what researchers are attempting at UCSF's Neuroscape Lab.
Our goal is to bridge technology and neuroscience to improve the function of your brain.
The reason we focus on cognitive control is because we can look at it as the very, sort of, base of the pyramid that all other aspects of cognition like memory, reasoning, decision making, all the way up to things like wisdom are dependent upon it. If you can't pay attention, everything crumbles. You can't build any of the higher order cognitive abilities.
[Peter] They're custom-designing games that could one day be prescribed as a kind of digital medicine for patients with conditions like ADHD.
So, where pharmaceutical medicines deliver molecular treatment, we think of this medicine as a digital medicine that delivers experiential treatment.
The video game's essentially like our pill.
They hook me up with an EEG cap, so that I can see my brain activity in real time, while playing a steering game called Project Evo.
[inaudible] ... and we'll see your brain responding to it.
[Peter] And there are signs it's working.
[Adam] So there you go, you got it now.
That game is now in the FDA approval process to become the first ever prescribable video game.
What we have frequently found is that we're able to get transferred benefits from game play to other aspects of attention that are very different than the game.
[Peter] Neuroscape is also experimenting with Virtual Reality. Because VR can utilize your whole body as a controller, it may well be able to compound the benefits for things like attention and memory.
A lot of data has shown that physical activity, even devoid of cognitive challenges, has positive benefits on your brain, especially the aging brain. So we ask the question, what happens if you give physical challenges that are integrated with cognitive challenge and create a sort of integrated approach? Will you have even more cognitive benefits if you're moving your entire body [inaudible] challenges as opposed to playing that same game just sitting there just moving your fingers, and we're testing that right now.
Now, despite your findings and despite the fact that you've been able to replicate this and you're in phase three trials, there doesn't seem to be consensus in the medical community. There are a lot of other scientists who say, Well no, I mean, any positives that you can derive from games are kind of mild and transitory at best, how do you respond to that? It's a complicated field and it's still early days.
I'm at least cautiously optimistic based on what we've seen over the last ten years that we're really onto something that's gonna be very positive of people using video games as therapeutic.
And if these games are prescribed one day to improve brain function, there are still questions about what the dosage should be.
It is important to make it fun, but it is also critical to think of it as something that's dosed and played for a limited time, and not interfering with the other important activities in your life.
Okay, now for the bad news.
Avid gaming can lead to injuries.
I see many people who have repetitive motion injuries from gaming extensively. Many gamers will game from eight to sixteen hours a day, six or seven days a week.
So my goal when I'm talking to them is to find out how much they game and which games they're playing, along with their injuries. The injuries are often the following: finger injuries, wrist injuries, elbow injuries, shoulder injuries, neck injuries.
It's the wide gamut of the human body, really.
[Peter] Doctor Harrison also sees something he calls Gamer's Thumb.
This is an issue whereby someone will have tendonitis on the back of their thumb, as well as on the volar, or palmar, aspect of the thumb.
So they'll have pain on the back of the thumb and the front.
Now, that I've only seen in gamers.
When they present with that, they have really abused their bodies.
Their thumbs are really on fire.
When this bad boy is down, then you've got a problem.
So, I'm here, I'm your patient, I don't have big problems yet, but I want to prevent problems.
Let me show you, there's like five basic tricks, so you're gonna go down, and then bring your fingers up.
Feel that? That loosens up your joints as well as your wrists.
Just start opening up everything and get everything moving really nice.
In and out with the thumb, and down.
This is one of the fundamental stretches that every gamer should do.
Console-based, keyboard-based, mouse, whatever: if it uses a thumb, you wanna have a healthy thumb.
You do them for five to ten minutes twice a day, not difficult.
I think video games are great, moderation is the key.
If you overdo it, then there are always issues that will be attached to that.
[Peter] Look, there's no question gaming can wear you out.
Some gamers at the Sports Academy even subject themselves to cryotherapy after long sessions.
The jury's still out on their effectiveness, but some players swear by it.
So, I decided to give it a try.
Alright, so there's freezing cold gas, it's dry.
You go through this for two, two and a half, three minutes, when you come out, which I can only hope is gonna be sometime soon, when you come out and your body starts to warm up again, your blood then starts to recirculate and goes back out to your extremities, and the idea being that circulation feels amazing, and you go to the [inaudible].
That was two and a half minutes, I made it! So what have we learned here, other than the fact that I'm a masochist? Gaming can be good for your hand-eye coordination and perception. It can help with focus, attention, maybe even memory.
Just how all that translates into the real world, though, is still up for debate.
We also know that repetitive gaming can take a toll on your body, so a little bit of moderation goes a long way.
When it comes to my own experience, I've played games for more than thirty years and never suffered any gaming-related injuries.
While I may never know if gaming helped my brain, I do know it didn't destroy it. So take that, Mom! [exit music playing]
" |
227 | 2,023 | "3 tips for using Google's Bard AI more effectively, according to Google itself | Mashable" | "https://mashable.com/article/google-bard-ai-chatbot-tips" | "Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > 3 tips for using Google's Bard AI chatbot more effectively, according to Google itself Share on Facebook Share on Twitter Share on Flipboard Google has tips for using Bard better.
Credit: Lorenzo Di Cola/NurPhoto via Getty Images An AI chatbot can be neat — a fun thing to play with — but perhaps you're not using it to its full potential. Google, however, recently published a blog post sharing tips for how to use its AI chatbot, Bard, more effectively.
The blog post is helpful in its entirety, but we've picked out three of the tips that might be the most helpful for users looking to get the most out of their AI chatbot.
1. Analyze and create images Google suggested that users can upload a picture, then use Bard to analyze it or provide more info about that image. For instance, you can get Bard to create content like that image or parse through an image of handwritten notes and create a summary email.
2. Create code For this tip it might help to know some coding, but Bard can definitely be useful for coders. Google has a whole page devoted to Bard's coding abilities.
But Google also noted that Bard is particularly useful for explaining code snippets or "if you’re learning about programming for the first time, or you need more support to understand what a block of code might output." 3. Get help planning a trip Google says that Bard can help create an itinerary for a vacation based around your interests. The more detail you provide, the better job it should do at creating a trip that serves you best.
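Circling back to the second tip: here is the kind of short snippet you might paste into Bard along with a prompt like "Explain this code and predict its output." This is a hypothetical Python example made up for illustration, not one taken from Google's blog post:

# A small function a beginner might ask Bard to walk through line by line.
def word_counts(text):
    """Count how many times each word appears in a string."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Asking Bard "what will this print?" exercises the explain-the-output use case.
print(word_counts("the quick brown fox jumps over the lazy dog the end"))
# Expected output: {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, 'jumps': 1, 'over': 1, 'lazy': 1, 'dog': 1, 'end': 1}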
Tech in 2023 has pretty much been defined by the proliferation of AI chatbots, including Google's Bard, OpenAI's ChatGPT, and Microsoft Bing's chatbot. Mashable has broken down the pros and cons of each — but it's also important to remember that you don't want to share too much with an AI chatbot. They can be vulnerable to hackers, and Google itself has warned its employees about sharing too much with AI chatbots.
" |
228 | 2,023 | "The 5 best AI chatbots of 2023 (so far) | Mashable" | "https://mashable.com/article/best-ai-chatbots-2023-list-chatgpt-bing-bard" | "Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > > The 5 best AI chatbots of 2023 (so far) Share on Facebook Share on Twitter Share on Flipboard There's lots of anthropomorphizing here.
Credit: Getty Images 2023, you're flying by. Join Mashable as we look back at everything that's delighted, amazed, or just confused us in 2023.
2023 is the year conversational AI went mainstream.
AI chatbots have been around for years, but they were mostly seen as gimmicky sideshows, ridiculed for giving unhinged responses or failing "the Nazi test." That is, until ChatGPT came along in November 2022.
The popularity of OpenAI's large language model caught the world's attention, and now everyone (especially OpenAI's competitors) is taking AI chatbots seriously. Tech giants like Microsoft and Google are devoting a massive amount of effort and resources to generative AI, introducing chatbots into their search engines and productivity tools. Thanks to public APIs, independent developers and startups suddenly have access to LLMs, allowing them to develop their own applications. There's now an abundance of conversational AI tools that are as good (or almost as good) as ChatGPT.
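As a rough illustration of what building on those public APIs looked like in 2023, here is a minimal sketch using OpenAI's Python library; the model name, the pre-1.0 client interface, and the placeholder key shown here are assumptions that may not match later versions of the API:

# Minimal sketch: a third-party app calling a hosted LLM (2023-era openai-python style).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": "Suggest three names for a note-taking app."}],
)

# The generated reply text lives in the first choice of the response.
print(response["choices"][0]["message"]["content"])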
ChatGPT is noted for its accuracy, but like all chatbots, it still "hallucinates." Keep in mind when using any of these tools that their responses could be inaccurate or out of date.
Factoring in considerations like accuracy, user experience, availability, and uniqueness, we've rounded up and ranked the best AI chatbots out there. Take a look.
1. ChatGPT To quote the late, great Tina Turner, ChatGPT is "simply the best." It wasn't the first AI chatbot, but it was the first to package an LLM as advanced as this into an approachable, user-friendly chat interface. The paid version is powered by OpenAI's GPT-4, which is so intelligent, it passed the bar exam.
But what makes it so appealing is the human-like responses that are conversational yet informative. Now that ChatGPT comes with a Bing plugin, it can browse the internet, giving you up-to-date search results, whereas before its source of data was cut off after September 2021.
With a free account, you get access to GPT-3.5, and with a ChatGPT Plus subscription of $20 a month, you get GPT-4.
How does ChatGPT stay so grounded despite its success? Credit: OpenAI 2. Bing Chat Thanks to a big investment in OpenAI, Microsoft's Bing Chat is just as advanced as ChatGPT. It uses the same LLM, GPT-4, and is completely free to use. One drawback is the number of messages, or turns, per conversation, which is currently limited to 30 per conversation, or 300 per day. But that's still plenty for what most people are trying to accomplish.
Bing also seems to have more restrictive guardrails, which results in shorter responses or refusal to explore certain topics. After a weird conversation with New York Times reporter Kevin Roose, Microsoft may have reined Bing in a bit. But if you want free access to a chatbot powered by GPT-4, there's not much to complain about with Bing Chat.
For free access to GPT-4, Bing Chat is hard to beat.
Credit: Bing 3. Google Bard Google scrambled to release its own AI chatbot, Bard, following the surprise success of ChatGPT and subsequent Bing Chat. After a rocky start that was called "botched" and "rushed," Bard has almost caught up to its competitors.
As of Google's announcement at Google I/O, Bard is open to the public and now powered by PaLM 2, the newest version of Google's LLM. While Bard is free to use, which is a major plus, it loses points for inaccuracy and responses that are less nuanced than ChatGPT's and Bing's.
Bard has some catching up to do with its competitors, but it's still one of the best out there.
Credit: Google 4. Character.AI While most AI chatbots focus on increasing productivity and providing helpful information, Character.AI leans into the science fiction of it all by simulating conversations with real or fictional characters. You may not ever get to meet Keanu Reeves, but with Character.AI, you could imagine what it would be like to chat with the John Wick actor.
As Mashable's Elizabeth de Luna pointed out in her explainer, this concept takes fan fiction to the next level where you can actually talk to simulations of beloved characters like Hermione Granger. Character.AI is powered by a proprietary technology trained on neural language models, and it's completely free.
Even within a fantasy conversation, de Luna ran into some issues with basic factual accuracy. For example, the Hermione character adamantly claimed to be in Hufflepuff, which any Harry Potter fan knows is untrue. But that doesn't seem to be a dealbreaker for some Character.AI users, who instead seem more concerned about its intentional limitations: 77,000 people have signed a petition to remove the chatbot's NSFW filter.
We get the petition to remove NSFW filters.
Credit: Character.AI 5. Replika Remember the movie Her where Joaquin Phoenix's character falls in love with his AI companion? Well, Replika is basically a real-life version of that. Using its proprietary LLM and "scripted dialog content," Replika lets users create their own personal AI companion. Think of it as if you built a relationship with ChatGPT over time. Once you wrap your head around that, you're ready to learn about the numerous stories of people developing romantic relationships with their Replika avatars. That might sound "delulu," but if you try out Replika for yourself, you'll see how easy it is to get attached.
As you might expect from an app designed to create intimate connections between humans and machines, Replika has faced controversies. In January, users reported instances of sexually aggressive interactions. When the company dialed down the sexual nature of conversations, other users were heartbroken by their AI companions that suddenly seemed cooler and distant.
Replika is free and also offers a paid version called Replika Pro for $15 a month.
Replika users have formed deep, meaningful relationships with their AI companions.
Credit: Replika Honorable mentions There are other AI chatbots that deserve a shoutout, but didn't make the cut for certain reasons.
Jasper is a popular one for work-related tasks. It helps you write social media copy, ads, and blog posts, and with Jasper Chat, you can turn conversations into the aforementioned marketing assets. But Jasper only provides a 7-day free trial, with paid subscriptions starting at $50 a month after that.
Another AI chatbot we're keeping an eye on is Anthropic's Claude.
Anthropic was founded by former members of OpenAI, so they know their way around an LLM. Currently Claude is only available through select business partnerships like Slack and Zoom, so individuals don't have access yet.
" |
229 | 2,023 | "OpenAI's Sam Altman breaks silence on AI executive order | Mashable" | "https://mashable.com/article/openai-response-ai-executive-order-silence" | "Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > OpenAI's Sam Altman breaks silence on AI executive order Share on Facebook Share on Twitter Share on Flipboard Nothing but crickets from the AI giant.
Credit: Mashable composite; Getty / Kevin Dietsch / Staff, iStock / Getty Images Plus Update: Hours after this story was published, Sam Altman posted on X/Twitter saying, "there are some great parts about the AI EO, but as the govt implements it, it will be important not to slow down innovation by smaller companies/research teams." Altman also said he is "pro-regulation on frontier systems," or large-scale foundation models, and "against regulatory capture." In the wake of President Biden's executive order on Monday, AI companies and industry leaders have weighed in on this watershed moment in AI regulation. But the biggest player in the AI space, OpenAI, has been conspicuously quiet.
The Biden-Harris administration's far-ranging executive order addressing the risks of AI builds upon voluntary commitments secured by 15 leading AI companies. OpenAI was among the first batch of companies to promise the White House safe, secure, and trustworthy development of its AI tools. Yet the company hasn't issued any statement on its website or X (formerly known as Twitter). CEO Sam Altman, who regularly shares OpenAI news on X, hasn't posted anything either.
OpenAI has not responded to Mashable's request for comment.
Of the 15 companies that made a voluntary commitment to the Biden Administration, the following have made public statements, all of which expressed support for the executive order: Adobe, Amazon, Anthropic, Google, IBM, Microsoft, Salesforce, and Scale AI. Nvidia declined to comment.
In addition to crickets from OpenAI, Mashable has yet to hear from Cohere, Inflection, Meta, Palantir, and Stability AI. But OpenAI and Altman's publicity tour proclaiming the urgent risks of AI and the need for regulation makes the company's silence all the more noticeable.
Altman has been vocal about the threat that generative AI made by his own company poses. In May, Altman, along with technology pioneers Geoffrey Hinton and Bill Gates, signed an open letter, stating, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." At a Senate hearing in May, Altman expressed the need for AI regulation: "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," said Altman in response to an inquiry from Sen. Blumenthal, D-CT, about the threat of superhuman machine intelligence.
So far, cooperation with lawmakers and world leaders has worked in OpenAI's favor. Altman participated in the Senate's bipartisan closed-door AI summit, giving OpenAI a seat at the table for formulating AI legislation. Shortly after Altman's testimony, leaked documents from OpenAI showed the company lobbying for weaker regulation in the European Union.
It's unclear where OpenAI stands on the executive order, but open-source advocates say the company already has too much lobbying influence. On Wednesday, the same day as the AI Safety Summit in the U.K., more than 70 AI leaders issued a joint statement calling for a more transparent approach to AI regulation. "The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst," said the statement.
Meta Chief AI Scientist Yann LeCun, one of the signatories, doubled down on this sentiment on X (formerly known as Twitter) by calling out OpenAI, DeepMind (a subsidiary of Google), and Anthropic for using fear-mongering to ensure favorable outcomes. "[Sam] Altman, [Demis] Hassabis, and [Dario] Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry," he posted.
Anthropic and Google leadership have both provided statements supporting the executive order, leaving OpenAI the lone company accused of regulatory capture yet to issue any comment.
What could the executive order mean for OpenAI? Many of the testing provisions in the EO relate to huge foundation models not yet on the market and future development of AI systems, suggesting consumer-facing tools like OpenAI's ChatGPT won't be impacted much.
"I don't think we're likely to see any immediate changes to any of the generative AI tools available to consumers," said Jake Williams, former US National Security Agency (NSA) hacker and Faculty member at IANS Research.
"OpenAI, Google, and others are definitely training foundation models and those are specifically called out in the EO if they might impact national security." So, whatever OpenAI is working on might be subjected to government testing.
In terms of how the executive order might directly impact OpenAI, Beth Simone Noveck, director of the Burnes Center for Social Change, said it could slow down the pace of new products and updates being released, and that companies will have to invest more in research and development and compliance.
"Companies developing large-scale language models (e.g. ChatGPT, Bard and those trained on billions of parameters of data) will be required to provide ongoing information to the federal government, including details of how they test their platforms," said Noveck, who previously served as the first United States Deputy Chief Technology Officer under President Obama.
More than anything, the executive order signals an alignment with growing consumer expectations for greater control and protection of their personal data, said Avani Desai, CEO of Schellman, a top CPA firm that specializes in IT audit and cybersecurity.
"This is a huge win for privacy advocates as the transparency and data privacy measures can boost user confidence in AI-powered products and services," Desai said.
So while the consequences of the executive order may not be immediate, it squarely applies to OpenAI's tools and practices. You'd think OpenAI might have something to say about that.
" |
230 | 2,023 | "Forget Ray-Ban Meta smart glasses. We tested cheaper ones that support ChatGPT. | Mashable" | "https://mashable.com/article/solos-airgo3-smartglasses" | "Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > Forget Ray-Ban Meta smart glasses. We tested cheaper ones that support ChatGPT.
My colleague TJ Fink demos the AirGo 3 smart glasses. Credit: Kimberly Gedeon / Mashable "Pfft! I bet this is yet another pair of music-playing smart glasses," I yawned when I saw the AirGo 3 spectacles on display at CES 2023.
I'll admit I was a little jaded. At the annual Las Vegas tech trade show, Paula Abdul was hawking her own pair of audio-emitting smart glasses, and I was unimpressed. They were fragile, uninspired, and quite frankly, plain ol' boring.
However, I'm grateful that I didn’t let Abdul’s disappointing smart glasses stop me from trying on another pair at CES – one made by a company called Solos.
Yes, it plays music, but my jaw dropped at how it reproduces audio (more on this later). Eight months later, the spectacles have changed significantly since I last saw them. There's even a new competitor on the market: Ray-Ban Meta smart glasses.
But I don't think Solos has much to worry about.
Yes, these smart glasses support ChatGPT AirGo 3's line of smart glasses has two offerings that run on ChatGPT – SolosChat and SolosTranslate – which you can access via the Solos AirGo companion app. SolosChat lets you press and hold an action button on the smart glasses' right temple to ask ChatGPT a question. In response, you’ll see the reply populate before your eyes in the app, and you can hear the answer read out to you, too.
SolosTranslate is another ChatGPT-supported service, which is probably my favorite as someone who loves to learn new languages. As an English speaker, I can, again, press and hold the action button and say anything out loud. Next, I'll get translation in the app for one of nine supported languages, including French, Spanish, Japanese, Korean, Mandarin, and Cantonese.
Me wearing AirGo 3 smart glasses. Credit: Kimberly Gedeon/Mashable I had a blast playing with SolosTranslate. It transcribed my speech in the app and translated it to my language of choice in seconds. Dear reader, this is gold. I played Vogue’s YouTube episode of “El Bolso de Jennifer Lopez” on my laptop, and the ChatGPT-supported smart glasses transcribed Lopez’s speech before rolling out an accurate and precise Spanish-to-English translation. I also used these glasses to watch Univision’s Enamorándonos, a Spanish-language dating TV show. I sometimes get lost in what’s going on, but thanks to these smart glasses, I can translate conversations I don’t understand.
Music sounds divine on the AirGo 3 smart glasses I own a pair of AirPods Pro and they drive me up the wall. They slip out of my ears so often while I’m commuting, I’m surprised they haven't dropped onto the subway train tracks yet. I also have a pair of Sony’s WH1000XM4 headphones, and while they sound divine and sit comfortably on my head, they’re not the most portable.
My colleague TJ Fink demos the AirGo 3 smart glasses. Credit: Kimberly Gedeon/Mashable The AirGo 3 smart glasses somewhat rectify the issues I face with the headphones I own. I can listen to music on these bad boys without worrying about a wayward earbud falling out while I walk (ahem, I’m looking at you, AirPods Pro), but I can easily fold them and stuff them into my bag’s front pocket (whereas my Sony XM4’s are too bulky to fit inside).
Solos claims that its smart glasses deliver high-quality spatial audio, and they’re spot on. I fired up Spotify on the AirGo 3s and launched the Hot Hits USA playlist. I listened to Dua Lipa’s “Dance the Night,” and the snappy tune sounded crisp, punchy, and clear. There are three sound profiles you can choose from in the Solos AirGo 3 companion app: Balanced, Dynamic and Relaxed. Balanced sounded the most honeyed to me, but according to the app, Dynamic is ideal for hip-hop while Relaxed is optimized for jazz and classical music. I can also take calls with the AirGo 3s. When I tested this feature myself, the person on the other end sounded pristine; they also told me that they heard me “loud and clear.” Designed like thick-armed reading glasses The review unit Solos sent me is black – just how I like ‘em. They’re lightweight, and from the front, they look like any other pair of reading glasses. However, the smart glasses’ arms are much thicker than your typical pair of spectacles, making room for all the tech that makes them, well, smart.
The thick arms abruptly transition into ultra-thin temple tips, giving them an unorthodox appearance. The temples look strange, but the average Joe wouldn’t know they’re smart glasses – just an, er, interesting style choice.
The temples are actually removable, so you can swap the frames for other options in the AirGo 3 lineup.
AirGo 3 smart glasses Credit: Solos For example, you can grab one of these reddish-orange, reflective, single-lens shades that are reminiscent of snowboarding goggles.
AirGo 3 smart glasses Credit: Solos If you wear glasses, don't worry. You can swap the lenses with prescription ones.
Setup is easier than I thought it’d be After charging the AirGo 3 smart glasses, I long-pressed the button on the right temple for two seconds to turn it on. I then heard a voice that told me the battery status (e.g., "Power high"). For Bluetooth pairing, I pressed the same button for five seconds and connected it to my Samsung Galaxy S22 Ultra.
I then downloaded the Solos AirGo companion app via the Google Play Store, which guides the user through the setup process as well as providing a mini tutorial on how to use the touch-activated controls. The basics are the following:
Single tap to play/pause
Long press to skip an audio track
Double press to go to previous audio track
Swipe left/right for volume up and down
Slide finger on touch sensor toward ears to accept calls
Slide finger on touch sensor away from ears to decline calls
Most touch controls were smooth and seamless, but the play/pause function (e.g., a single tap anywhere on the temples) did not respond. Keep in mind, however, that the AirGo 3 smart glasses I have are a prototype. An acquaintance who also has a pair had no issues with single tapping to play and pause his music tracks. Solos will be sending me a review unit soon. Once I get my hands on it, I'll update this article with my play/pause experience.
How are the AirGo 3s different from Ray-Ban Meta smart glasses?
Before we dive into how the AirGo 3s differ from the Ray-Ban Meta smart glasses, let's talk about how they're similar. They both support some form of AI. AirGo 3s, as mentioned, support ChatGPT, allowing you to ask it almost any question you want, from the best way to reheat a pizza to which countries were under the Roman Empire's rule. The Ray-Ban Meta smart glasses use Meta AI, the social media tech giant's newly debuted ChatGPT rival, which will become more sophisticated next year with a free software update. This upgrade will allow the Ray-Ban Meta smart glasses to "understand" what you're looking at. For example, if you want to know the name of the building you're standing in front of, it will tell you. Meta also claims that it can translate signs and menus for you, too.
AirGo 3 smart glasses Credit: Solos
They both also allow you to take calls and play music. However, where they differ is that the Ray-Ban Meta smart glasses can capture video and photos; the AirGo 3s cannot. It's also worth noting that the AirGo 3s' companion app has a lot more to play with. For example, you can track metrics like steps, calories, and more in the app, making it an excellent workout companion. And don't worry about sweat: the AirGo 3s have an IP67 dust and water resistance rating, which beats the Ray-Ban Meta smart glasses' IPX4 rating.
Plus, the Ray-Ban Meta smart glasses reportedly last up to four hours on a single charge. Solos claims that the AirGo 3s last more than double that (up to 10 hours), which matches my personal experience with the spectacles.
Final thoughts
The AirGo 3s are the most sophisticated, cutting-edge tech product I've tested this year. You mean to tell me I can access ChatGPT, which can spew out answers to any questions I ask and translate foreign speech into my own language, directly on my face? How wild is that? Plus, the spatial audio that emanates from these spectacles is surprisingly crisp, tickling my ear with high-quality sound. They may not match my Sony XM4s in quality (and they're not designed to), but their portability makes them a feasible alternative for my early-morning commute.
Save for the single-tap foible, all touch controls are responsive, seamless, and intuitive. Move over, Ray-Ban Meta smart glasses. There's a new pair of spectacles ready to snatch your crown. The AirGo 3 smart glasses start at $199 (much cheaper than their rival's $299 price tag), and you can get 'em at Solosglasses.com.
Kimberly Gedeon is a tech explorer who enjoys doing deep dives into the most popular gadgets, from the latest iPhones to the most immersive VR headsets. She's drawn to strange, avant-garde, bizarre tech, whether it's a 3D laptop, a gaming rig that can transform into a briefcase, or smart glasses that can capture video. Her journalism career kicked off about a decade ago at MadameNoire where she covered tech and business before landing as a tech editor at Laptop Mag in 2020.
" |
231 | 2,023 | "Microsoft Surface Laptop Go 3 review: The keyboard makes me want to cry | Mashable" | "https://mashable.com/review/microsoft-surface-laptop-go-3" | "Search @mouseenter="selectedIndex = index + '-' + key"> Search Result Tech Science Life Social Good Entertainment SHOP THE BEST Travel Product Reviews DEALS Newsletters VIDEOS > > Microsoft Surface Laptop Go 3 review: The keyboard makes me want to cry Share on Facebook Share on Twitter Share on Flipboard Microsoft Surface Laptop Go 3 Credit: Kimberly Gedeon/Mashable Mashable Score 4 Wow Factor 4 User Friendliness 5 Performance 3.5 Bang for the Buck 3.5 Elite typing experience Super portable Excellent build quality Sleek design Battery life could be better No backlit keyboard As you read this, dear reader, you should know that I have typed this entire review on the Microsoft Surface Laptop Go 3. Now, do I usually do this? No. Why? Because keyboards typically don’t beat the one on my 14-inch MacBook Pro.
However, the Microsoft Surface Laptop Go 3 is different. The keyboard on this 12.4-inch notebook is one of the best I've ever used during my entire career as a laptop reviewer. I'll dive into this later because, of course, this is the intro – I have to tease you a little before I reveal more.
One thing you need to understand about the Surface Laptop Go 3, though, is that it's not for everyone. If you need a laptop with au courant specs that can handle CPU- and GPU-intensive tasks, this ain't for you. However, if you're a student who needs a snappy keyboard that can keep up with you while typing notes – this is "the one".
Microsoft Surface Laptop Go 3 price and specs
Microsoft Surface Laptop Go 3 on a table Credit: Kimberly Gedeon/Mashable
Microsoft sent me the Surface Laptop Go 3 with the following specs:
12th Gen Intel Core i5-1235U CPU
Intel Iris Xe graphics (integrated with the CPU)
16GB of RAM
256GB of SSD storage
12.4-inch touchscreen
Windows 11 Home
This bad boy costs $999, but there's a cheaper option: a $799 variant that drops the RAM to 8GB (the 256GB SSD stays the same). You can buy it here at Microsoft.com.
What I love about the Surface Laptop Go 3
The Surface Laptop Go 3, the successor to the Laptop Go 2, is one sleek, stylish laptop. However, nothing grinds my gears more than when those sophisticated vibes get ruined by a smudgy chassis. Luckily, the Surface Laptop Go 3 is a grime repellent.
The design is chef's kiss
One thing I absolutely adore about this laptop is that it doesn't attract fingerprints. As much as I love my 15-inch MacBook Air, it'd betray me to criminal investigators in a heartbeat.
Surface Laptop Go 3 review Credit: Kimberly Gedeon/Mashable
For a notebook that starts at $799, the Surface Laptop Go 3 feels like a $1,200 laptop. The build quality is excellent. Applying pressure to the deck yields little to no give, and the keys don't have that gross plasticky feel of other laptops in this budget range. That's not to say there's no plastic in this laptop at all. The base, for example, blends aluminum and recycled plastic. The lid, on the other hand, is made of anodized aluminum. Microsoft was strategic about where it placed its cheaper materials. After all, we rarely touch a laptop's underside.
The Surface Laptop Go 3 I have comes in Sage and it is stunning. It just has a hint of green, giving it a seaweed-esque tint. The reflective Microsoft logo adds to the laptop’s subtle sophistication; it has a professional, polished, classic appearance. The Surface Laptop Go 3 also comes in Sandstone, Platinum, and Ice Blue. No, the Surface Laptop Go 3 does not come in black. Microsoft says it did research and development, and people didn’t care for black. And I agree. Black is boring! Overall, the Surface Laptop Go 3 has a sturdy, well-built design for a sub-$800 laptop, but it may not appeal to those who have larger hands. It may feel too cramped.
The keyboard is one of the best I've ever used
Snap, click, and clack! This keyboard is all that! When I tested the keyboard at the Surface event in late September, my eyes widened in surprise. This keyboard is one of the snappiest that I've ever tested. The tactile feedback is top notch. My fingers leap from one letter to another like a graceful pianist's. It's as if the keys immediately bounce back after hitting their point of actuation, making my fingers fly off to the next letter quickly and efficiently.
Surface Laptop Go 3 Credit: Kimberly Gedeon/Mashable
The keys are perfectly spaced, and the contrast between the shale-gray keys and the white symbols makes the letters easy to see.
I’m a keyboard snob, so take it from me when I say this island-style keyboard is exquisite.
On one of my daily driver laptops, the 15-inch MacBook Air, I hit 83 words per minute with an accuracy rate of 98% on LiveChat's typing speed test.
I tried the same test on the Surface Laptop Go 3, and I hit 87 words per minute with the same accuracy rate.
The power button doubles as a fingerprint reader
If you love having a passwordless sign-in with Windows Hello, you'll likely be elated to know that the Surface Laptop Go 3 has a fingerprint reader. No, it's not a dedicated fingerprint reader – it's integrated into the power button. Even so, I had a seamless, smooth experience with the fingerprint sensor setup.
What I dislike about the Surface Laptop Go 3
One of the compromises you'll have to settle for is that the Surface Laptop Go 3 is packed with an old-generation Intel Core i5 processor. In other words, the CPU inside the Surface Laptop Go 3 ain't hot anymore. It's got a 12th Gen processor while most laptops are rocking more au courant 13th Gen CPUs.
Performance is middle-of-the-road
The Surface Laptop Go 3 is packed with an Intel Core i5-1235U. The "U" stands for ultra-low power, which implies that it's not a beastly chip by any means, but it's designed to be energy efficient. Still, the performance on the Surface Laptop Go 3 should be sufficient for its intended audience: students and on-the-go busy bees who need a laptop for casual usage. If you plan to simply browse the web, use some productivity apps, and stream your favorite shows, I found that the Surface Laptop Go 3 managed this workflow just fine.
Surface Laptop Go 3 Credit: Kimberly Gedeon/Mashable However, if you run tasks that test the limits of the CPU and GPU, consider getting the Surface Laptop Studio 2 instead (if you insist on getting a Microsoft laptop).
My gripe with the Surface Laptop Go 3 isn’t its middling performance per se – it’s its performance compared to other similarly priced laptops that gives me pause. On the Geekbench 6 benchmark, the Surface Laptop Go 3 delivered a score of 6,043. Based on my prior testing experience, you can squeeze out more performance from the likes of the MSI Prestige 14 Evo and the Asus ZenBook 14 , which are packed with 12th-gen Intel Core i5-1240P processors – and they’re both under $800. But keep in mind that you won’t get the same premium build quality from these two laptops as the Surface Laptop Go 3.
No backlit keyboard
In dark areas or low-lit rooms, you may find it difficult to see the keyboard because, unfortunately, the Surface Laptop Go 3 does not have any backlighting.
Thick bezels
The Surface Laptop Go 3 comes with a touchscreen, which I don't use often. (I'm afraid to leave fingerprint smears on the display.) However, whether I was pinching the panel to zoom in or dragging it down to scroll, the 12.4-inch notebook zippily responded to my touch gestures.
Microsoft Surface Laptop Go 3 Credit: Kimberly Gedeon/Mashable
I watched the "Argylle" trailer on YouTube to see how I like the display, and it's not half bad. No, it won't win any awards (e.g., it's not full of rich color), but it's crisp enough that I could spot the tiny, subtle freckles that populate Bryce Dallas Howard's eyes and nose. What I could do without, however, are the thick bezels – and the bottom bezel is even larger. It's so 2018!
What's 'eh' about the Surface Laptop Go 3?
Sometimes, there are aspects of laptops that I neither love nor dislike – I'm just indifferent to them. In the Surface Laptop Go 3's case, it's the touchpad. It's simply OK.
It's a decent mix of resistance and glide, plus it responds well to Windows 11 gestures like pinch-to-zoom and two-finger scrolling. The only thing I don't like is that, unlike the chassis, it attracts fingerprints.
Mix of legacy and modern ports
Sadly, the Surface Laptop Go 3 doesn't come with a Thunderbolt 4 port (which means you won't get ultra-fast transfer rates), but it offers one USB Type-A port, a USB Type-C port, and a 3.5mm headphone jack. Still, you can use the USB-C port to charge the Surface Laptop Go 3; you can also use it to connect to an external display.
Surface Laptop Go 3 Credit: Kimberly Gedeon/Mashable On the right side of the Surface Laptop Go 3, you’ll find Microsoft’s proprietary Surface Connector port, allowing you to charge the notebook while still keeping your USB-C port free for other peripherals.
Microsoft Surface Laptop Go 3 battery life
On the PCMark 10 battery life test, with brightness set to the max, the Surface Laptop Go 3 lasted 7 hours and 51 minutes. Keep in mind that, in my experience, the average laptop in this price range lasts between 9 and 10 hours.
Microsoft Surface Laptop Go 3 webcam
The Surface Laptop Go 3 comes with a 720p HD webcam. Videoconferencing on this shooter will make you look like a Georgia O'Keeffe painting (and not in a good way) – I hope the people on the other end love watercolor art.
Webcam photo on Microsoft Surface Laptop Go 3 Credit: Kimberly Gedeon/Mashable
Final thoughts
This laptop is tiny. If Jason Momoa carried this laptop in his hands, I'm pretty sure it'd look like the Surface Duo (a foldable phone Microsoft released in 2020). To put things into perspective, the 12.9-inch Apple iPad Pro has more screen real estate than the display on this 12.4-inch machine – and it's a tablet.
However, for those who prefer compact, portable laptops that can easily slip into their backpacks and travel bags, the Surface Laptop Go 3 should be on your shortlist. I can see this being the daily driver of young students or Gen Z users seeking a laptop that can handle casual usage with style and sophistication.
And most importantly, if you're seeking a sub-$800 laptop with arguably the best keyboard on the market in this price range, the Surface Laptop Go 3 is the one for you.
" |
233 | 2,023 | "AI poses ‘risk of extinction,’ warn European tech luminaries" | "https://thenextweb.com/news/ai-poses-risk-of-extinction-warn-european-tech-luminaries" | "Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on May 30, 2023 Deep tech AI poses ‘risk of extinction,’ warn European tech luminaries The cautions keep on coming Some of Europe’s top technologists today joined a global group of IT luminaries in warning that AI could lead to extinction.
At just 22 words long, their statement is short and stark: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Issued by the non-profit Center for AI Safety, the message has been signed by an array of business leaders, researchers, and public figures. They include Sam Altman, the CEO of OpenAI, Kevin Scott, the CTO of Microsoft, and, err, the musician Grimes. Her ex-boyfriend, Elon Musk, however, was a notable absentee, despite his long track record of raising concerns about the field.
A sizeable proportion of the signatories come from Europe. Among them are Demis Hassabis, the London-born CEO of Google DeepMind, Kersti Kaljulaid, the former president of Estonia, and Geoffrey Hinton, a British Turing Award-winner who recently quit Google to talk about AI’s dangers.
The statement joins a bevy of recent alarm bells about the existential threats posed by AI.
In the last two months alone, industry leaders have called for the training of powerful AI systems to be suspended amid fears of threats to humanity; healthcare professionals have demanded a pause on developing artificial general intelligence; Musk has warned AI could cause “civilisation destruction,” and Google boss Sundar Pichai has admitted that the dangers “keep [him] up at night.” Cynics, however, may note that many figures sounding the alarm are also resisting any AI regulations that could adversely impact their businesses.
Story by Thomas Macaulay. Thomas is a senior reporter at TNW. He covers European tech, with a focus on deeptech, startups, and government policy.
" |
234 | 2,023 | "AI 'godfather’ quits Google and warns of dangers ahead" | "https://thenextweb.com/news/ai-godfather-quits-google-warns-of-dangers-ahead" | "Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on May 2, 2023 Deep tech AI ‘godfather’ quits Google and warns of dangers ahead Meanwhile, the EU is moving closer to its landmark AI Act Dr Geoffrey Hinton, widely referred to as AI’s “godfather,” has confirmed in an interview with the New York Times that he has quit his job at Google — to talk about the dangers of the technology he helped develop.
Hinton’s pioneering work in neural networks — for which he won the Turing award in 2018 alongside two other university professors — laid the foundations for the current advancement of generative AI.
The lifelong academic and computer scientist joined Google in 2013, after the tech giant spent $44m to acquire a company founded by Hinton and two of his students, Ilya Sutskever (now chief scientist at OpenAI) and Alex Krizhevsky. Their neural network system ultimately led to the creation of ChatGPT and Google Bard.
But Hinton has come to partly regret his life’s work, as he told the NYT. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” he said. He decided to leave Google so that he could speak freely about the dangers of AI and ensure that his warnings don’t impact the company itself.
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023 According to the interview, Hinton was prompted by Microsoft’s integration of ChatGPT into its Bing search engine, which he fears will drive tech giants into a potentially unstoppable competition. This could result in an overflow of fake photos, videos, and texts to the extent that an average person won’t be able to “tell what’s true anymore.” But apart from misinformation, Hinton also voiced concerns about AI’s potential to eliminate jobs and even write and run its own code, as it’s seemingly capable of becoming smarter than humans much earlier than expected.
The more companies improve artificial intelligence without control, the more dangerous it becomes, Hinton believes. “Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That’s scary.”
The need to control AI development
Geoffrey Hinton isn’t alone in expressing fears over AI’s rapid and uncontrolled development.
In late March, more than 2,000 industry experts and executives in North America signed an open letter, calling for a six-month pause in the training of systems more powerful than GPT-4, ChatGPT’s successor.
The signees — including researchers at DeepMind, computer scientist Yoshua Bengio, and Elon Musk — emphasised the need for regulatory policies, cautioning that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Across the Atlantic, ChatGPT’s growth has stirred the efforts of EU and national authorities to efficiently regulate AI’s development without stifling innovation.
Individual member states are trying to oversee the operation of advanced models. For instance, Spain, France, and Italy have opened investigations into ChatGPT over data privacy concerns — with the latter being the first Western country to regulate its use after imposing a temporary ban of the service.
The union as a whole is also moving closer to the adoption of the anticipated AI Act — the world’s first AI law by a major regulatory body. Last week, Members of the European Parliament agreed to advance the draft to the next stage , called trilogue, in which lawmakers and member states will work out the bill’s final details.
According to Margrethe Vestager , the EU’s tech regulation chief, the bloc is likely to agree on the law this year, and businesses could already start considering its implications.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way,” Vestager said when the bill was first announced.
Unless regulatory efforts in Europe and around the globe are sped up, we risk repeating the approach of Oppenheimer, about which Hinton is now sounding the alarm: “When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success.”
Story by Ioanna Lykiardopoulou. Ioanna is a writer at TNW. She covers the full spectrum of the European tech ecosystem, with a particular interest in startups, sustainability, green tech, AI, and EU policy. With a background in the humanities, she has a soft spot for social impact-enabling technologies.
" |
236 | 2,023 | "Google’s AI could soon consume as much electricity as Ireland, study finds" | "https://thenextweb.com/news/googles-ai-could-consume-as-much-electricity-as-ireland" | "Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on October 11, 2023 Deep tech Google’s AI could soon consume as much electricity as Ireland, study finds The servers on which AI models run need a sh*t tonne of juice Amid the debate over the dangers of widespread AI development, an important concern may have been overlooked: the huge amount of energy required to train these large language models.
A new study published this week suggests that the AI industry could consume as much energy as a country like Argentina, the Netherlands, or Sweden by 2027.
What’s more, the research estimates that if Google alone switched its whole search business to AI, it would end up using 29.3 terawatt-hours per year — equivalent to the electricity consumption of Ireland.
The paper was published by Alex de Vries at the VU Amsterdam School of Business and Economics.
In 2021, Google’s total electricity consumption was 18.3 TWh, with AI accounting for 10%–15% of it. However, the tech giant is rapidly scaling the AI parts of its business, most notably with the launch of its Bard chatbot, but also the integration of AI into its search engine.
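To put those figures side by side, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies the numbers quoted in this article; the variable names and the rounding are illustrative, not the study's:

# Illustrative arithmetic using only the figures quoted above.
google_total_2021_twh = 18.3               # Google's total 2021 electricity use
ai_share_low, ai_share_high = 0.10, 0.15   # AI's estimated share of that total
full_ai_search_twh = 29.3                  # full-AI-search scenario (~Ireland)

ai_2021_low = google_total_2021_twh * ai_share_low    # ~1.8 TWh
ai_2021_high = google_total_2021_twh * ai_share_high  # ~2.7 TWh

print(f"AI's 2021 slice: {ai_2021_low:.1f} to {ai_2021_high:.1f} TWh")
print(f"Scenario vs. 2021 AI load: {full_ai_search_twh / ai_2021_high:.0f}x to "
      f"{full_ai_search_twh / ai_2021_low:.0f}x")

In other words, the scenario de Vries sketches sits roughly an order of magnitude above AI's share of Google's 2021 consumption.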
However, the scenario stipulated by the study assumes full-scale AI adoption utilising current hardware and software, which is unlikely to happen rapidly, said de Vries. One of the main hurdles to such widespread adoption is the limited supply of graphics processing units (GPUs) powerful enough to process all that data.
While entirely hypothetical, the study casts light on an often unstated impact of scaling up AI technologies. Data centres already use between 1 and 1.3% of all the world's electricity, and adding AI to existing applications like search engines could rapidly increase that share.
“It would be advisable for developers not only to focus on optimising AI, but also to critically consider the necessity of using AI in the first place, as it is unlikely that all applications will benefit from AI or that the benefits will always outweigh the costs,” advised de Vries.
Story by Siôn Geschwindt. Siôn is a reporter at TNW. From startups to tech giants, he covers the length and breadth of the European tech ecosystem. With a background in environmental science, Siôn has a bias for solutions delivering environmental and social impact at scale.
" |
238 | 2,023 | "EU fines Meta record €1.2B as feud grows over US data transfers" | "https://thenextweb.com/news/eu-fines-meta-record-e1-2b-as-feud-over-data-transfers-to-the-us-escalates" | "Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on May 22, 2023 Data and security EU fines Meta record €1.2B as feud over data transfers to the US escalates Meta said it sets 'a dangerous precedent' In a seminal moment for international data flows, the EU has fined Meta a record-breaking €1.2bn for privacy violations.
The penalty is the largest ever for a violation of GDPR, which was introduced to protect personal information. According to EU regulators, Meta broke the rules by transferring user data from the bloc to the US for processing.
The Facebook owner made these transfers on the basis of standard contractual clauses (SCCs), which govern the flow of personal data. But an EU investigation determined that SCCs don’t provide enough protection from US surveillance.
Andrea Jelinek, chair of the European Data Protection Board, called the infringement “very serious” because the transfers were systematic, repetitive, and continuous.
“Facebook has millions of users in Europe, so the volume of personal data transferred is massive,” she said. “The unprecedented fine is a strong signal to organisations that serious infringements have far-reaching consequences.” Meta called the fine “unjustified and unnecessary” and said it would appeal the ruling.
Data borders
The intervention could prove pivotal for data transfers more broadly. Lawmakers in the EU and US are currently developing a new transatlantic Data Privacy Framework that would clarify the requirements for moving information across borders.
Nick Clegg, Meta’s head of global affairs, said the new ruling had disregarded the progress being made on this issue. He called it “a dangerous precedent” for data transfers that imperils the foundations of an open internet.
“Without the ability to transfer data across borders, the internet risks being carved up into national and regional silos, restricting the global economy and leaving citizens in different countries unable to access many of the shared services we have come to rely on,” said Clegg.
Naturally, Clegg has a vested interest in easing data flows to the US, but he’s not alone in wanting the removal of digital borders. According to Janine Regan, Legal Director for Data Protection at law firm Charles Russell Speechlys, there’s political agreement on both sides of the Atlantic to resolve the issue.
“It’s likely that an alternative transfer mechanism will be ready over the summer so that Meta does not have to completely suspend transatlantic transfers, but this will be little consolation for a company facing such a record-breaking fine,” she said.
Dangerous times for data violations
The new ruling also serves as a warning to other companies that transfer data. Chris Linnell, Principal Data Protection Consultant at cyber security firm Bridewell, called it “a stark reminder” that SCCs alone don’t adequately protect personal data.
He advised all organisations to undertake transfer risk assessments when processing personal data outside of the EU. In addition, he recommends regular ongoing reviews of compliance and potential risks to data subjects.
“Ultimately, contracts in place between parties will not act as a safeguard when recipient organisations have their own legal obligations to fulfil when it comes to national surveillance laws, such as FISA in the United States,” said Linnell.
Story by Thomas Macaulay. Thomas is a senior reporter at TNW. He covers European tech, with a focus on deeptech, startups, and government policy.
" |
239 | 2,020 | "Automated facial recognition breaches GDPR, says EU digital chief" | "https://thenextweb.com/news/automated-facial-recognition-breaches-gdpr-says-eu-digital-chief" | "Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on February 17, 2020 Deep tech Automated facial recognition breaches GDPR, says EU digital chief Commissioner Margrethe Vestager believes facial recognition in the EU requires consent Image by: Europäische Kommission Vertretung Deutschland The EU’s digital and competition chief has said that automated facial recognition breaches GDPR, as the technology fails to meet the regulation’s requirement for consent.
Margrethe Vestager, the European Commission’s executive vice president for digital affairs, told reporters that “as it stands right now, GDPR would say ‘don’t use it’, because you cannot get consent,” EURACTIV revealed today.
GDPR classes information on a person’s facial features as biometric data, which is labeled as “sensitive personal data.” The use of such data is highly restricted, and typically requires consent from the subject — unless the processing meets a range of exceptional circumstances.
These exemptions include it being necessary for public security. This has led the UK’s data regulator to allow police to use facial recognition CCTV, as it met “the threshold of strict necessity for law enforcement purposes.”
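Stated as logic, the rule the article describes boils down to a simple disjunction. Below is a deliberately oversimplified sketch in Python, illustrative only; real GDPR analysis turns on many more conditions and legal bases than these two flags:

# Oversimplified sketch of the consent logic described above; illustrative only.
def biometric_processing_allowed(has_consent: bool, public_security_necessity: bool) -> bool:
    # Facial features count as biometric, "sensitive personal data" under GDPR:
    # processing requires consent unless an exemption applies, such as strict
    # necessity for law enforcement or public security.
    return has_consent or public_security_necessity

# Vestager's point: remote facial recognition cannot obtain consent, so the
# first flag is effectively always False, leaving only the exemptions.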
Vestager told reporters that the Commission will further investigate automated facial recognition before introducing legislation, allowing member states to make their own domestic decisions in the meantime.
“So what we will say in the paper in a very lawyered up language is, let’s pause and figure out if there are any [situations], and if any, under what circumstances facial recognition remotely should be authorized”, she said.
Her comments reflect the EU’s recent cancellation of plans to introduce a five-year moratorium on the technology.
EU seeks to differentiate itself
Vestager has become one of the most high-profile EU politicians through her work as the bloc’s competition commissioner, where she slapped Silicon Valley giants with multiple billion-dollar fines, leading the New York Times to dub her “the most powerful regulator of big tech on the planet.” In December 2019, she added the role of “Commissioner for Europe fit for the Digital Age” to her antitrust portfolio.
Vestager’s enhanced powers have made her a key player in the EU’s ambitions to create tech companies that can compete globally by emphasizing the bloc’s unique strengths.
“China has the data, the US has the money, and we have the purpose”, she said, adding that the EU should retain its “willingness to protect the fundamental values” that had “made us one of the most attractive places to live on the planet ever.”
Story by Thomas Macaulay. Thomas is a senior reporter at TNW. He covers European tech, with a focus on deeptech, startups, and government policy.
" |
242 | 2,023 | "It’s ‘insane’ to let TikTok operate in Europe, NYU professor warns" | "https://thenextweb.com/news/insane-tiktok-in-europe-nyu-professor-scott-galloway" | "Toggle Navigation News Events TNW Conference 2024 June 20 & 21, 2024 TNW Vision: 2024 All events Spaces Programs Newsletters Partner with us Jobs Contact News news news news Latest Deep tech Sustainability Ecosystems Data and security Fintech and ecommerce Future of work More Startups and technology Investors and funding Government and policy Corporates and innovation Gadgets & apps Early bird Business passes are 90% SOLD OUT 🎟️ Buy now before they are gone → This article was published on October 3, 2023 Deep tech It’s ‘insane’ to let TikTok operate in Europe, NYU professor warns TikTok's presence in the west is under mounting scrutiny The decision to permit TikTok in Europe is “insane,” according to Professor Scott Galloway of New York University.
At an event in Helsinki last week, Galloway described TikTok as probably “the most ascendant technology company in history” — and “a national defence threat.” The professor’s concerns stem from two key factors: TikTok’s alleged links to the Chinese government and the platform’s peerless content recommendation system.
“They have implanted a neural jack into the web matter of our youth,” Galloway said.
TikTok has denied claims the Chinese Communist Party (CCP) influences the app’s content and receives user data directly from the platform. But lawmakers in the west have raised concerns about Beijing’s connections to ByteDance , the parent company of TikTok.
Critics have accused the app of facilitating espionage. They note that a national security law requires ByteDance to provide data to Chinese authorities on request.
They have also sounded the alarm about the app’s AI-based video recommendations, which personalises the content in user feeds. The system has helped TikTok become the world’s most popular app. In Europe alone, the platform has more than 150 million monthly users.
Galloway fears this audience is being brainwashed by the CCP.
“If I were them, I would put my thumb delicately, insidiously, covertly, elegantly, on the scale of anti-Western content and on the scale of pro-China content,” he said at a talk hosted by WithSecure, a Finnish cyber security firm.
“I would raise a generation of European and US military, civic, nonprofit, and corporate leaders who just feel a little bit shittier about democracy… and that Taiwan should probably be a province of China.” Galloway spoke amid mounting pressure on TikTok in Europe. In February, the European Commission prohibited the use of TikTok on staff phones. Since then, similar bans have been announced in several member states, as well as the UK and Norway. According to Galloway, even stricter restrictions should have been implemented.
“I think the fact that we let TikTok come into the US and to Europe is insane,” Galloway said.
Among European lawmakers, however, perspectives on TikTok are mixed. Some politicians are equally concerned about surveillance by Silicon Valley tech firms, which have been targeted by a recent slew of EU regulations.
Galloway has also fallen out with a US tech giant. In August, the NYU professor said he’d been locked out of his X account after a feud with Elon Musk.
Story by Thomas Macaulay. Thomas is a senior reporter at TNW. He covers European tech, with a focus on deeptech, startups, and government policy.
" |
245 | 2,023 | "Sam Altman’s World Tour Hopes to Reassure AI Doomers | WIRED" | "https://www.wired.com/story/sam-altman-world-tour-ai-doomers" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Morgan Meaker Business Sam Altman’s World Tour Hopes to Reassure AI Doomers Photograph: Win McNamee/Getty Images Save this story Save Save this story Save The excitement around the London arrival of OpenAI CEO Sam Altman was palpable from the queue that snaked its way around the University College London building ahead of his speech on Wednesday afternoon. Hundreds of eager-faced students and admirers of OpenAI’s chatbot ChatGPT had come here to watch the UK leg of Altman’s world tour, where he expects to travel to around 17 cities. This week, he has already visited Paris and Warsaw. Last week he was in Lagos. Next, he’s on to Munich.
But the queue was soundtracked by a small group of people who had traveled to loudly express their anxiety that AI is advancing too fast.
“Sam Altman is willing to bet humanity on the hope of some sort of transhumanist utopia,” one protester shouted into a megaphone. Ben, another protester, who declined to share his surname in case it affects his job prospects, was also worried. “We’re particularly concerned about the development of future AI models which might be existentially dangerous for the human race.” Speaking to a packed auditorium of close to 1,000 people, Altman seemed unfazed. Wearing a sharp blue suit with green patterned socks, he talked in clipped answers, always to the point. And his tone was optimistic, as he explained how he thinks AI could reinvigorate the economy. “I'm excited that this technology can bring the missing productivity gains of the last few decades back,” he said. But, while he didn’t mention the protests outside, he did admit to concerns over how generative AI could be used to spread disinformation.
“Humans are already good at making disinformation, and maybe the GPT models make it easier. But that’s not the thing I’m afraid of,” he said. “I think one thing that will be different [with AI] is the interactive, personalized, persuasive ability of these systems.” Although OpenAI plans to build in ways to make ChatGPT refuse to spread disinformation, and plans to create monitoring systems, he said, it will be difficult to mitigate these impacts when the company releases open-source models to the public—as it announced several weeks ago that it intends to do. “The OpenAI techniques of what we can do on our own systems won't work the same.” Despite that warning, Altman said it's important that artificial intelligence not be overregulated while the technology is still emerging. The European Parliament is currently debating legislation called the AI Act, new rules that would shape the way companies can develop such models and might create an AI office to oversee compliance. The UK, however, has decided to spread responsibility for AI between different regulators, including those covering human rights, health and safety, and competition, instead of creating a dedicated oversight body.
“I think it's important to get the balance right here,” Altman said, alluding to debates now taking place among policymakers around the world about how to build rules for AI that protect societies from potential harm without curbing innovation. “The right answer is probably something between the traditional European-UK approach and the traditional US approach,” Altman said. “I hope we can all get it right together this time.” He also spoke briefly about OpenAI’s commercial strategy of selling access to its API, a type of software interface, to other businesses. The company wants to offer intelligence as a service, he says. “What we'd like is that a lot of people integrate our API. And then as we make the underlying model better, it lifts the whole world of products and services up. It's a very simple strategy.” Listening to what people want from that API has been a big part of his world trip, he said.
Altman also talked about his vision for AI-assisted humans, where people are enhanced and not replaced by technology. “I think there will be way more jobs on the other side of this technological revolution,” he said. “I'm not a believer that this is the end of work at all.” He added: “I think we now see a path where we build these tools that get more and more powerful. And there will be trillions of copies being used in the world, helping individual people be more effective, capable of doing way more.” Before the trip, Altman said on Twitter the purpose of his world tour was to meet with OpenAI users and people interested in AI in general. But in London, it looked like the company was also trying to cement its leader’s reputation as the person who would usher the world into the AI age. Audience members asked him about his vision for AI, but also about the best way to educate their children and even how to build life on Mars. In an onstage discussion with UCL professors, one panelist said she was here to represent humanity. Altman uncharacteristically jumped in to stress his company was not working against it. “I represent humanity too,” he said.
" |
246 | 2,023 | "The iPhone 15 Opts for Intuitive AI, Not Generative AI | WIRED" | "https://www.wired.com/story/apple-iphone-15-opts-for-intuitive-ai-not-generative-ai" | "Open Navigation Menu To revist this article, visit My Profile, then View saved stories.
Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories.
Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Khari Johnson Business The iPhone 15 Opts for Intuitive AI, Not Generative AI Photograph: David Paul Morris/Getty Images Save this story Save Save this story Save Tech product launches in 2023 have become predictable: Everything now comes with generative AI features that will serve up chatty but knowledgeable text or mind-blowing images. The rollout of the iPhone 15 this week shows Apple opting to Think Different.
The new device comes with the A17 Pro processor, an Apple-designed chip that puts more power behind machine-learning algorithms. But the features highlighted at the launch event yesterday were generally subtle, not mind-expanding. The company appears focused on AI that is intuitive, not generative, making artificial intelligence a part of your life that smooths over glitches or offers helpful predictions without being intrusive. Apple made a similar choice to ignore the generative AI bandwagon earlier this year at its developer conference in June.
A new voice-isolation feature for the iPhone 15 , for example, uses machine learning to recognize and home in on the sound of your voice, quieting background noise on phone calls. As usual for iPhone launches, yesterday’s event spent ample time on the power of the new phone’s camera and image-enhancing software. Those features lean on AI too, including automatic detection of people, dogs, or cats in a photo frame to collect depth information to help turn any photo into a portrait after the fact.
Additional AI-powered services are also coming to newer iPhone models via the new iOS 17 operating system, due out next week. They include automated transcription of voicemails, so a person can see who’s calling before picking up a phone call, and more extensive predictive text recommendations from the iPhone keyboard. Neither is as flashy as a know-it-all chatbot. But by making life easier, they just might convince people to spend more time with their phones, pushing up usage of Apple’s services.
Apple’s intuitive AI is also at work in some new accessibility features.
For people who are blind or have low vision, a new Point and Speak feature in the Magnifier app will let them aim the camera at objects with buttons, like a microwave, and hear their phone say which button their finger is touching. For people with medical conditions like ALS that can rob a person of the ability to speak, iOS 17 can create a synthetic voice that sounds like them after they read 15 minutes of text prompts.
Smartphones have become hard to improve on with transformative new features, and overall the iPhone 15 rollout was underwhelming, says Tuong Nguyen, director analyst at Gartner covering emerging technology. But Apple excels at the kind of interface design that makes subtle AI-powered features work.
Nguyen thinks the adaptive audio feature that blends music or calls with nearby voices or ambient sound, due out this fall for AirPods, and the new “double tap” gesture that controls an Apple Watch Series 9 with a simple tap of index finger and thumb—both powered by machine learning—have the potential to become features so intuitive that they become a standard that other companies emulate.
Rather than integrating reality-distorting image generation into the iPhone or following Google’s lead and launching an ethically questionable feature that can make people disappear from photos, Apple executives yesterday lauded features that enhance reality, like a new 5X zoom on the iPhone 15, better image quality in low light, and adding spatial video shot with the iPhone 15 for Apple Vision Pro.
“It’s about leading with the value for the consumer, not using buzzwords or technical terms that consumers don’t necessarily understand,” says Carolina Milanesi, a consumer tech analyst at Creative Strategies. She says improvements that allow for better color, or zoom, or automate portraits are important for Apple because the camera is a major driver of smartphone purchases.
Generative AI is a growing use case for smartphones as assistants like ChatGPT , image generation, and other apps that rely on the technology become more common. Apple’s new A17 Pro chip’s “neural engine,” tuned to power machine-learning algorithms more efficiently, can most likely boost generative AI apps that run locally on a device. And despite Apple’s avoidance of gen AI talk at launch events so far, Bloomberg reporting says Apple is developing its own generative AI framework named Ajax.
Nguyen of Gartner suspects that Apple’s own generative AI projects will prominently appear in its products someday, but that the company probably won’t call it generative AI or even discuss that work until it is mature enough to be presented in a distinctive way. “I think if they talk about it the way everyone else has, then it seems more ‘Me too’ than is typical of Apple,” he says.
" |
247 | 2,022 | "Is AI Art a ‘Toy’ or a ‘Weapon’? - The Atlantic" | "https://www.theatlantic.com/technology/archive/2022/09/dall-e-ai-art-image-generators/671550" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce Is AI Art a ‘Toy’ or a ‘Weapon’? A prolific AI artist shares his perspective on the controversial medium.
Editor’s Note: This article is part of our coverage of The Atlantic Festival. Learn more and watch festival sessions here.
Earlier this year, the technology company OpenAI released a program called DALL-E 2, which uses artificial intelligence to transform text into visual art. People enter prompts (“plasticine nerd working on a 1980s computer”) and the software returns images that showcase humanlike vision and execution, veer into the bizarre, and might even tease creativity. The results were good enough for Cosmopolitan, which published the first-ever AI-generated magazine cover in June—an image of an astronaut swaggering over the surface of Mars—and they were good enough for the Colorado State Fair, which awarded an AI artwork first place in a fine-art competition.
OpenAI gave more and more people access to its program, and those who remained locked out turned to alternatives like Craiyon and Midjourney. Soon, AI artwork seemed to be everywhere, and people started to worry about its impacts.
These programs are trained on hundreds of millions of image-text pairs, but their technical details are opaque to the general public—more black boxes in a tech ecosystem that’s full of them. Some worry they might threaten the livelihoods of artists, provide new and relatively easy ways to generate propaganda and deepfakes, and perpetuate biases.
Yet Jason Scott, an archivist at the Internet Archive, prolific explorer of AI art programs, and traditional artist himself, says he is “no more scared of this than I am of the fill tool”—a reference to the feature in computer paint programs that allows a user to flood a space with color or patterns. In a conversation at The Atlantic Festival with Adrienne LaFrance, The Atlantic’s executive editor, Scott discussed his quest to understand how these programs “see.” He called them “toys” and “parlor game[s],” and did a live demonstration of DALL-E 2, testing prompts such as “the moment the dinosaurs went extinct illustrated in Art Nouveau style” or “Chewbacca on the cover of The Atlantic magazine in the style of a Renaissance painting” (the latter of which resulted in images that looked more canine than Wookiee). Scott isn’t naive about the greater issues at play—“Everything has a potential to be used as a weapon”—but at least for a moment, he showed us that the tech need not be apocalyptic.
Their conversation has been edited and condensed for clarity.
Watch: Atlantic executive editor Adrienne LaFrance in conversation with Jason Scott
Adrienne LaFrance: When we talk about AI art, what do we even mean? How does it work? Jason Scott: So what we’re calling “AI art”—by the way, they’re now calling it “synthetic media”—it’s the idea of using analysis of deep ranges of images, not just looking at them as patterns or samples, but actually connecting their captions and their contexts up against pictures of all sorts, and then synthesizing new versions from all that.
LaFrance: So basically a giant database of images that can be drawn from to call to mind the thing that you prompt it to make.
Scott: Right.
LaFrance: And why is it exploding now? It seems like various forms of machine learning and AI have really accelerated in recent years.
Scott: They let it out of the lab and let regular people play with the toys. Across the companies that are doing this, some are taking the model of We’ll let everyone play with it now—it’s part of the world.
LaFrance: When you think about the implications for this sort of technology, give us an overview of how this is going to change the way we interact with art, or whatever other industries come to mind. For instance, at The Atlantic we have human artists making art. I’m sure they might have strong feelings about the idea of machines making art. What other industries would be potentially affected? Scott: Machines are becoming more and more capable of doing analysis against images, text, music, movies. There are experimental search engines out there that you can play with and say things like “I need to see three people around a laptop.” And previously it would have to be three people and the laptop, but it actually is starting to make matches where there’s three people in the room. And the weirder and more creative you get with this toy, the more fun it gets. I see a future where you’ll be able to say, “Could I read a book from the 1930s where it’s got a happy ending and it takes place in Boston?” Or, “Can I have something where they fell in love but they’re not in love at the end?” LaFrance: I have more questions, but I think now it’d be a good time to start showing people what we mean. Do you have some examples? Scott: I have some examples of things that I did. So this is “detailed blueprints on how to build a beagle.” LaFrance: So these are prompts that you gave the model, and this is what came out of it? Scott: Yes. For the people who don’t know how this whole game works, it’s pretty weird. You usually type in some sort of a line to say, “I’m looking for something like this,” and then it creates that, and then people get more and more detailed, because they’re trying to push it. Think of it less as programming than saying to somebody, “Could you go out there and dance like you’re happy and your kid was just born?” And you’ll watch what happens. So it’s kind of amorphous. This is a lion using a laptop in the style of an old tapestry.
This is Santa Claus riding a motorcycle in the style of 1970s Kodachrome.
This is Godzilla at the signing of the Declaration of Independence.
This is a crayon drawing of a labor action.
These are bears doing podcasts.
This is GoPro footage of the D-Day landing.
I’m always playing with it, and the reason you’re hearing all those strange prompts from me is because I want to understand: What are these systems seeing? What are they doing? It’s so easy as a parlor game to say, “Draw a cellphone as if it was done as a Greco-Roman statue.” But what about doing a bittersweet sky, or trying to draw a concerned highway? What does it see? LaFrance: What does this suggest to you about the nature of art? This gets to be sort of an existential question, but is it still human-made art in the way that we think of it, and should we be bothered by that? I mean, we use all sorts of tools to make art.
Scott: Everyone is super entitled to their own opinion. All I can say is, I did drawings in a zine in my teens; I was a street caricaturist; my mother was a painter; my father does painting; my brother’s a landscape artist. And coming from that point of view, I am no more scared of this than I am of the fill tool or the clone brush [in Photoshop]. Everything has a potential to be used as a weapon—imagery, words, music, text. But we also see an opportunity here for people who never knew that they had access to art. I can almost hear the gears crack and start moving again when I go to somebody and I’m like, “Could you give me something to draw?” And they do it and they see how it goes. I can’t get angry at that particular toy. But I won’t pretend that this toy will stay in its own way neutral, or even is neutral now.
LaFrance: I was talking to a colleague about these sorts of tools the other week, and we were really compelled by the idea of being able to visualize dreams. What other sorts of things—fiction comes to mind—can we imagine but don’t normally get to visualize? Scott: I love telling these AIs to draw “exquisite lattice work”—using phrases like exquisite or rare—or give me “leather with gold inlay on a toaster,” and watching it move into that world and design things in seconds that aren’t perfect, but are fun.
LaFrance: We’re going to experiment, which is always dangerous. You’re never supposed to do stuff in real time. But I have some prompts for you.
Scott: This is DALL-E. There are many others. Think of it just like early web servers or early web browsers. There’s a bunch of companies with various people funding them or doing things their own way.
[Scott now leads LaFrance through a demonstration of DALL-E 2: It’s included in the video embedded above.] Scott: We see the ability to do everything from intricate pen-and-ink drawings to cartoons. People are using it now to make all sorts of textures for video games; they are making art along a theme that they need to cover an entire wall of a coffee shop; they’re using it to illustrate their works. People are trying all sorts of things with this technology and are excited by it.
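For readers who want to reproduce the prompt-in, images-out loop Scott demonstrates, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the setup used on stage: at the time of this conversation DALL-E 2 was available only through its web interface, and the sketch instead assumes OpenAI's pre-1.0 "openai" Python package and an API key supplied via the OPENAI_API_KEY environment variable; the prompts are taken from the interview.
# A minimal sketch of the prompt-in, images-out workflow described above.
# Assumes the pre-1.0 "openai" Python package and an OPENAI_API_KEY
# environment variable; the prompts come from the interview itself.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompts = [
    "a lion using a laptop in the style of an old tapestry",
    "Santa Claus riding a motorcycle in the style of 1970s Kodachrome",
]

for prompt in prompts:
    # Each call returns URLs pointing at freshly synthesized images.
    response = openai.Image.create(prompt=prompt, n=2, size="512x512")
    for i, image in enumerate(response["data"]):
        print(f"{prompt!r} -> image {i}: {image['url']}")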
" |
248 | 2,019 | "Where does your plastic go? Global investigation reveals America's dirty secret | Recycling | The Guardian" | "https://www.theguardian.com/us-news/2019/jun/17/recycled-plastic-america-global-crisis" | "A Guardian report from 11 countries tracks how US waste makes its way across the world – and overwhelms the poorest nations Editor’s pick: best of 2019. We’re bringing back some of our favorite stories of the past year.
"https://support.theguardian.com/contribute?acquisitionData=%7B%22source%22%3A%22GUARDIAN_WEB%22%2C%22componentType%22%3A%22ACQUISITIONS_EDITORIAL_LINK%22%2C%22componentId%22%3A%22USeoy2019_standfirst_bestof19%22%2C%22campaignCode%22%3A%22USeoy2019%22%7D&INTCMP=USeoy2019\">Support the Guardian’s journalism in 2020 News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing Plastic bottles bundled in a recycling facility. Bales such as these travel around the world on shipping containers. Photograph: Bloomberg via Getty Images Best of 2019 Where does your plastic go? Global investigation reveals America's dirty secret Plastic bottles bundled in a recycling facility. Bales such as these travel around the world on shipping containers. Photograph: Bloomberg via Getty Images A Guardian report from 11 countries tracks how US waste makes its way across the world – and overwhelms the poorest nations Editor’s pick: best of 2019. We’re bringing back some of our favorite stories of the past year.
Support the Guardian’s journalism in 2020 Erin McCormick , Bennett Murray , Carmela Fonbuena , Leonie Kijewski , Gökçe Saraçoğlu , Jamie Fullerton , Alastair Gee and Charlotte Simmonds Mon 17 Jun 2019 01.00 EDT What happens to your plastic after you drop it in a recycling bin? According to promotional materials from America’s plastics industry, it is whisked off to a factory where it is seamlessly transformed into something new.
This is not the experience of Nguyễn Thị Hồng Thắm, a 60-year-old Vietnamese mother of seven, living amid piles of grimy American plastic on the outskirts of Hanoi. Outside her home, the sun beats down on a Cheetos bag; aisle markers from a Walmart store; and a plastic bag from ShopRite, a chain of supermarkets in New Jersey, bearing a message urging people to recycle it.
Nguyễn Thị Hồng Thắm is paid $6.50 a day to sort recycling on the outskirts of Hanoi.
Tham is paid the equivalent of $6.50 a day to strip off the non-recyclable elements and sort what remains: translucent plastic in one pile, opaque in another.
A Guardian investigation has found that hundreds of thousands of tons of US plastic are being shipped every year to poorly regulated developing countries around the globe for the dirty, labor-intensive process of recycling. The consequences for public health and the environment are grim.
A team of Guardian reporters in 11 countries has found: Last year, the equivalent of 68,000 shipping containers of American plastic recycling were exported from the US to developing countries that mismanage more than 70% of their own plastic waste.
The newest hotspots for handling US plastic recycling are some of the world’s poorest countries, including Bangladesh, Laos, Ethiopia and Senegal, offering cheap labor and limited environmental regulation.
In some places, like Turkey , a surge in foreign waste shipments is disrupting efforts to handle locally generated plastics.
With these nations overwhelmed, thousands of tons of waste plastic are stranded at home in the US, as we reveal in our story later this week.
These failures in the recycling system are adding to a growing sense of crisis around plastic, a wonder material that has enabled everything from toothbrushes to space helmets but is now found in enormous quantities in the oceans and has even been detected in the human digestive system.
Reflecting grave concerns around plastic waste, last month, 187 countries signed a treaty giving nations the power to block the import of contaminated or hard-to-recycle plastic trash. A few countries did not sign. One was the US.
A new Guardian series, United States of Plastic, will scrutinize the plastic crisis engulfing America and the world, publishing several more stories this week and continuing for the rest of 2019.
“People don’t know what’s happening to their trash,” said Andrew Spicer, who teaches corporate social responsibility at the University of South Carolina and sits on his state’s recycling advisory board. “They think they’re saving the world. But the international recycling business sees it as a way of making money. There have been no global regulations – just a long, dirty market that allows some companies to take advantage of a world without rules.” Migrant workers sort through plastic bottles at the Thaiplastic Recycle Group plant in Samut Sakhon, outside Bangkok, Thailand.
Where America’s recycling lands
Plastic only came into mass consumer use in the 1950s, but in the Pacific Garbage Patch it is already thought to be more common than plankton. Officials around the globe have banned particularly egregious plastic pollutants, such as straws and flimsy bags, yet America alone generates 34.5m tons of plastic waste each year, enough to fill Houston’s Astrodome stadium 1,000 times.
Of the 9% of America’s plastic that the Environmental Protection Agency estimated was recycled in 2015, China and Hong Kong handled more than half: about 1.6m tons of our plastic recycling every year. They developed a vast industry of harvesting and reusing the most valuable plastics to make products that could be sold back to the western world.
But much of what America sent was contaminated with food or dirt, or it was non-recyclable and simply had to be landfilled in China. Amid growing environmental and health fears, China shut its doors to all but the cleanest plastics in late 2017.
Since the China ban, America’s plastic waste has become a global hot potato, ping-ponging from country to country. The Guardian’s analysis of shipping records and US Census Bureau export data has found that America is still shipping more than 1m tons a year of its plastic waste overseas, much of it to places that are already virtually drowning in it.
A red flag to researchers is that many of these countries ranked very poorly on metrics of how well they handle their own plastic waste. A study led by the University of Georgia researcher Jenna Jambeck found that Malaysia, the biggest recipient of US plastic recycling since the China ban, mismanaged 55% of its own plastic waste, meaning it was dumped or inadequately disposed of at sites such as open landfills. Indonesia and Vietnam improperly managed 81% and 86%, respectively.
“We are trying so desperately to get rid of this stuff that we are looking for new frontiers,” said Jan Dell, an independent engineer, whose organization The Last Beach Cleanup works with investors and environmental groups to reduce plastic pollution. “The path of least resistance is to put it on a ship and send it somewhere else – and the ships are going further and further to find some place to put it,” she said.
Take Vietnam. Minh Khai, a village on a river delta near Hanoi, is the center of a waste management cottage industry. Rubbish from across the world, inscribed in languages from Arabic to French, lines almost every street in this community of about 1,000 households. Workers in makeshift workshops churn out recycled pellets amid toxic fumes and foul stench from the truckloads of scrap that are transported there every day. Even Minh Khai’s welcome arch, adorned with bright red flags, is flanked by plastic waste on both sides.
In 2018, the US sent 83,000 tons of plastic recycling to Vietnam. On the ground, America’s footprint is clear: a bag of York Peppermint Patties from Hershey, with US labeling, and an empty bag from a chemical coatings manufacturer in Ohio.
“We’re really scared of the plastic fumes, and we don’t dare to drink the water from underground here,” said Nguyễn Thị Hồng Thắm, the plastic sorter, wearing thick gloves, a face mask and a traditional Vietnamese conical hat to protect herself from the sun.
“We don’t have money so we don’t have any choice but to work here.” While the exact health effects of workers’ exposure to plastic recycling operations have not been well studied, the toxic fumes resulting from the burning of plastics or plastic processing can cause respiratory illness. Regular exposure can subject workers and nearby residents to hundreds of toxic substances, including hydrochloric acid, sulfur dioxide, dioxins and heavy metals, the effects of which can include developmental disorders, endocrine disruption, and cancer.
Once the plastic is sorted by workers like Tham, others feed the scrap into grinders before putting it through densifiers that melt and condense the scrap so it can be molded into pellets.
The village of Minh Khai is the center of a waste management cottage industry.
Waste pollutes beaches in Vietnam’s Bình Thuận province.
Business continues to boom in Minh Khai despite tightening rules.
The Vietnamese prime minister, Nguyễn Xuân Phúc, ordered a tightening of scrap standards in July 2018, and legal monthly imports were cut to one-tenth of what they had been. As of April, more than 23,400 shipping containers of scrap remain held up in customs. But business continues to boom in Minh Khai. Tham said that scrap is still arriving from Haiphong, northern Vietnam’s largest port, and other parts of the country every day, and records show a significant rebound in imports.
As countries like Vietnam, Malaysia and Thailand banned imports, records show the plastic waste fanning out to a host of new countries. Shipments began making their way to Cambodia , Laos, Ghana, Ethiopia, Kenya and Senegal, which had previously handled virtually no US plastic.
The Guardian found that each month throughout the second half of 2018, container ships ferried about 260 tons of US plastic scrap into one of the most dystopian, plastic-covered places of all: the Cambodian seaside town of Sihanoukville, where, in some areas, almost every inch of the ocean is covered with floating plastic and the beach is nothing but a glinting carpet of polymers.
Heng Ngy lives in a wooden house over a sea of plastic.
“I cannot accept plastic being imported into our country,” said a resident, Heng Ngy, 58. Ngy and his wife live in a wooden house on stilts that seems to hover on a sea of plastic. A pungent stench wafts up to the open-aired rooms.
Cambodia’s waste problem is believed to stem from its own use of plastic and a lack of any system for dealing with it. No one interviewed in Sihanoukville had any idea that plastic recycling was being exported from the United States, and what happened to the plastic after it arrived is unclear.
Experts estimate that 20% to 70% of plastic entering recycling facilities around the globe is discarded because it is unusable – so any plastic being recycled at Sihanoukville would inevitably result in more waste there.
Alex Gonzalez-Davidson, the co-founder of the Cambodian environmental organization Mother Nature, said his organization had not been aware of the issue. But “if it works, they will bring more and more”, he said. For now, shipments of plastic appear to have tailed off.
Waste is spread on the beach in Sihanoukville, Cambodia.
How plastic waste fuels a global business
How does your plastic get from your curbside to a village in south-east Asia? Through a trading network that crosses oceans and traverses continents. It’s a network that is complex, at times nefarious, and in which few consumers understand their role. Now, that network is at a breaking point.
Plastic’s first stop on its months-long journey is a recycling facility where it is sorted into bales based on its type – soda bottles, milk jugs and clamshell-style containers, for instance, are all made of subtly different kinds – and readied for sale.
Waste plastic is a commodity, and recycling brokers search across the US and abroad for buyers who will want to melt the plastic down, turn it into pellets, and make those pellets into something new.
In the past, it made economic sense to ship the plastic to Asia, because shipping companies that transport China’s manufactured goods to the US end up with thousands of empty shipping containers to carry back. In the absence of American goods to fill them, the companies have been willing to ship out America’s recycling at rock-bottom rates.
Steve Wong links your recycling with international purchasers.
Steve Wong, a Hong Kong-based businessman, is one of the middlemen who connects your recycling with international buyers. “At one time, I was one of the biggest exporters in the world,” he said, worth millions. Now, Wong said, his company, the Hong-Kong based Fukutomi Recycling , was deep in debt.
Wong’s problem is hardly a lack of supply. Each month the equivalent of thousands of shipping containers worth of recyclable plastics, which used to be exported, are piling up all over the United States. Nor is his worry a shortage of demand for plastic. It is desperately needed by factories in China for manufacturing into myriad new products – from toys and picture frames to garden gazebos.
What is nearly killing his business is the fact that many countries have soured on the recycling industry, after unscrupulous operators set up shop, operating as cheaply as possible, with no regard for the environment or local residents.
“In our industry, if you do it properly, you save the environment,” Wong said. “If you do it improperly, you destroy the environment.” As far as profits go, the numbers just barely favor recycling.
Wong said he might spend $150 to buy a ton of plastic scrap from a US recycler. Once it is shipped abroad, sold to a processor, turned into pellets and then again shipped to a manufacturer, the seller might ask as much as $800 per ton.
Yet the cost of similar virgin plastic, which is often higher quality, is just $900 to $1,000 a ton.
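To make plain why “the numbers just barely favor recycling,” here is a minimal sketch of the arithmetic using only the per-ton figures quoted above. The freight and processing costs that sit between the $150 purchase and the $800 asking price are not reported, so the sketch shows the gross spread and the pellets’ discount to virgin plastic rather than a true profit.
# A minimal sketch of the recycled-pellet arithmetic, using only the
# per-ton figures quoted in this article.
scrap_purchase = 150                   # $/ton paid for US plastic scrap
pellet_asking_price = 800              # $/ton asked after processing
virgin_low, virgin_high = 900, 1_000   # $/ton for virgin plastic

gross_spread = pellet_asking_price - scrap_purchase
print(f"Gross spread before shipping and processing: ${gross_spread}/ton")

# Recycled pellets undercut the often higher-quality virgin plastic by only:
print(f"Discount to virgin: ${virgin_low - pellet_asking_price}"
      f"-${virgin_high - pellet_asking_price}/ton")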
Wong believes the answer in the future will be to process the material closer to the United States. That is why he has planned trips to meet with government officials in the Dominican Republic and Haiti, and why, on a recent Wednesday, Wong crisscrossed back and forth through heavy traffic in the Mexican city of Monterrey, located about 150 miles south of Laredo, Texas.
Wong, a trim 61-year-old dressed head-to-toe in khaki like a safari hunter, was working to set up a new plastics recycling factory for an investor who hopes to one day process US plastic.
At one reseller – a corrugated-metal warehouse piled floor-to-ceiling with plastic that included shimmery sheaths of wrapping from US retail stores – Wong wanted to test the quality of the supply. He filled a baggie with ground-up flakes of black plastic from picking crates, then took a cigarette lighter and lit one of the flakes on fire. He carefully sniffed the smoke to get a sense of what variety of plastic it was.
At Wong’s next stop, an existing Monterrey recycling processor, you could get a sense of the work the new factory might do.
Plastic is stretched and formed into pellets in a variety of colors.
A rudimentary plastic processing machine stretched 40ft across the bare dirt of the warehouse floor. The processor takes rejected car parts and grinds them up into confetti-sized flakes. Workers feed these flakes into a flume that channels them past a heater to melt them. The melted plastic is pressed into long, white strings, which are stretched across the room and allowed to harden. At that point, they are chopped into pellets a little bigger than rice grains.
Wong said he would like to build more modern factories with up-to-date systems for eliminating toxic releases to the air and water. But he said he was sure that many of his less scrupulous competitors would keep exporting on the cheap. He suggested that even in countries that had banned plastic imports, the material continued to be smuggled in.
“Recyclers have set up factories in all these countries, but they don’t have enough supply. So, even though it is smuggling, even though it is not legal, they still have to do what it takes to get the plastic.” With US plastic landing in countries that have never seen it in such quantities, local residents are crying foul.
In the Philippines , about 120 shipping containers a month are arriving in Manila and an industrial zone in the former US military base at Subic Bay. Records indicate they were filled with plastic scrap shipped from such places as Los Angeles, Georgia and the Port of New York-Newark.
From the Manila port, shipping records and Philippines customs documents show, some of the US plastic was transported to Valenzuela City. The area, on the outskirts of the Philippine capital, is known as “Plastic City” and residents are increasingly concerned about the number of processing factories sprouting in their midst.
“You smell that?” said a shopkeeper, Helen Lota, 47, as she stood in front of her neighborhood convenience store at noon one day last month. “That’s nothing. It’s worse towards evening. It gets suffocating.” “There are times it’s really hard to breathe. Many of us here are getting sick,” said Lota. “I had my daughter’s cough checked in the hospital. But the X-ray is clear. The coughing must be caused by the smell.” Noticing Lota complaining about the plastic problem, passersby stopped to chime in. “My mother’s cough won’t go away, probably because of the smell,” said Renante Bito, 38.
A worker puts styrofoam in a shredder at a recycling plant in Valenzuela City, north of Manila.
Yet recycling is also one of the area’s biggest income sources. Officials and residents interviewed by the Guardian said they had assumed the plastic being processed in their town was the Philippines’ own waste. None realized that some of it was being shipped from the US. Representatives for the factories receiving US waste declined to be interviewed.
In Turkey, US plastic imports may be putting an entire profession at risk. Since China closed its doors, the amount of plastic recycling Turkey takes in from abroad has soared, from 159,000 to 439,000 tons in two years.
Each month, about 10 ships pull into the ports of Istanbul and Adana, carrying about 2,000 tons of cheap US scrap plastic that is no longer wanted by China. Most of it comes from the ports of Georgia, Charleston, Baltimore and New York. Some of it is described in shipping records as “Walmart film scrap”, the clear cling wrap used to secure huge pallets of products sold by Walmart. (Walmart declined to comment on the issue.) These cargo ships join dozens of others from the UK and other European countries.
Their arrival is closely watched by Turkey’s scrap pickers, who number in the hundreds of thousands and travel the streets collecting scraps from houses and businesses to resell to factories for manufacturing into products such as plastic bags.
Men search through pieces suitable for recycling at the municipal garbage dump in the south-eastern city of Diyarbakir, Turkey.
Now, the scrap pickers say, the factories are buying cheaper and cleaner plastic from the foreign recycling coming in on ships. Piles of their unsold, locally collected plastic are building up in urban storage yards. They have organized a campaign to stop the flood of foreign plastic, getting friends who work in the port to take videos of materials being offloaded and conducting their own ad hoc investigations.
“There are 500,000 street collectors in Turkey, working almost like ants to collect the waste,” said Baran Bozoğlu, head of Turkey’s Chamber of Environmental Engineers. Yet he said the “uncontrolled and unlimited” import of foreign recycling was leaving these local recyclers without markets for the scrap they collect. “It’s like we have flour and water and, instead of making our own bread, we import bread from abroad! Does that make any sense to you?” Every day, Eser Çağlayan, 33, wheels his giant white collecting bag through a booming business district along the shores of the Bosphorus strait, hunting for treasures that people throw out, along with the usual plastic and paper scraps. In the past, Çağlayan, a 20-year veteran of the scrap-picking trade, was able to feed his family of five with the $800 or so he made every month. But this year, he said, his income was down by about a third due to the competition from cheap, imported recycling.
“I want to tell people in US this: recycle in your own yard,” he said. “Don’t bring down our income and put us all in danger of hunger.”
How people are fighting for change
The environmental and social ramifications of America’s plastic exports are shocking even to those in the industry. Bob Wenzlau is considered one of the founding fathers of the US curbside recycling system, having helped to launch the program in Palo Alto, California, in 1976.
Curbside recycling “was started with a really good intention; I used to feel so proud,” said Wenzlau. Now, after learning of the effects the nation’s exports are having overseas, he said, “my heart aches, because the system is doing harm”.
Wenzlau recently convinced the Palo Alto city council to pass a measure requiring the city’s recyclers to report on the social and environmental consequences of any recycling that goes to foreign countries.
Even in San Francisco, long hailed for the high percentage of waste it is able to recycle, the head of the city’s waste disposal provider has said that the system is failing.
“The simple fact is, there is just too much plastic – and too many different types of plastics – being produced; and there exist few, if any, viable end markets for the material,” Michael J Sangiacomo of Recology recently wrote in an op-ed.
A study released this spring by the environmental group Gaia documented the human toll of US plastics exports on the countries that receive them.
“The impact of the shift in plastic trade to south-east Asian countries has been staggering – contaminated water supplies, crop death, respiratory illness from exposure to burning plastic, and the rise of organized crime abound in areas most exposed to the flood of new imports,” the report found.
“These countries and their people are shouldering the economic, social and environmental costs of that pollution, possibly for generations to come.” For many experts, the most frightening example of how an out-of-control recycling industry can overwhelm a country is Malaysia. Immediately following the China ban, it became the go-to destination for US plastic and is still paying the price.
In the first 10 months of 2018, the US exported 192,000 metric tons of plastic waste to Malaysia for recycling. Some of the factories had licenses to process foreign waste. Some only had licenses to deal with Malaysian plastic waste but secretly processed foreign waste. Often, such “processing” actually meant illegally burning plastic, with the toxic fumes inhaled by Malaysians living near unlicensed factories and dump sites.
Containers filled with plastic waste are seen before being sent back to their countries of origin in Port Klang, Malaysia.
In October, the Malaysian government announced plans to immediately stop issuing new permits for importing plastic waste, and to end all plastic waste importing within three years. Even so, thousands of tons of junk plastic remain heaped on the landscape, left behind by unscrupulous business operations.
On the outskirts of Jenjarom, a town in the district of Kuala Langat, where local authorities shut down 34 illegal factories last July, a land manager struggled to get rid of 10ft-high piles of plastic left under a corrugated roof by illegal importers of foreign waste. Nearby, a huge field of foreign plastic had been abandoned by the former renters: Chinese illegal factory owners, who left without warning following the crackdown.
And the illegal importation of US waste is continuing. According to the environmental minister Yeo Bee Yin’s accounts to the local press, many shippers simply change the codes on the documentation for their cargo containers to make it look like they are sending virgin plastic, which isn’t regulated, instead of the same old recycling scrap.
CK Lee, a lawyer and activist, with burned plastic residue in Kuala Langat.
The continued arrival of foreign plastics is no surprise to Pang Song Lim, a 44-year-old civil engineer who lives in Sungai Petani, a town of half a million in the north-west state of Kedah. Officials say there may be 20 illegal plastic-processing factories there. Every evening at sunset, Lim prepares his house and his nose for the onslaught from the burning of foreign plastic waste nearby. Foul smoke engulfs homes and a local school.
“It’s normally after eight o’clock,” Lim said. “Burned plastic … acidic … it hurts my chest. I try to seal my windows and block under the door with carpet.” “You wake up at midnight because of the smell,” said Christina Lai, a Sungai Petani activist. “One day this land will be taken over by rubbish and not humans.”
Lead reporter: Erin McCormick (Oakland, California and Monterrey, Mexico). Reporters: Bennett Murray (Hanoi, Vietnam), Leonie Kijewski (Phnom Penh, Cambodia), Carmela Fonbuena (Manila, Philippines), Gökçe Saraçoğlu (Istanbul, Turkey), Jamie Fullerton (Jenjarom and Sungai Petani, Malaysia), Febriana Firdaus (Jakarta, Indonesia), Kimberley Brown (Quito, Ecuador), Kwasi Gyamfi Asiedu (Accra, Ghana), Redwan Ahmed (Dhaka, Bangladesh).
Editors: Alastair Gee, Charlotte Simmonds. Copy editor: Matthew Cantor. Graphics: Heather Jones/MSJONESNYC. Special thanks to Jan Dell and Claire Arkin.
" |
249 | 2,022 | "How to Fix American Higher Ed - The Atlantic" | "https://www.theatlantic.com/ideas/archive/2022/05/student-loans-forgiveness-higher-ed/639438" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
How to Really Fix Higher Ed Rather than wiping the slate clean on student debt, Washington should take a hard look at reforming a broken system.
American higher education is the envy of the world, and it’s also failing our students on a massive scale. How can both be true simultaneously? Our decentralized, competitive system of research institutions is a national treasure, unparalleled in human history. We have the best universities, best professors, and best systems of discovery, and we attract the best talent. But the American educational system leaves many high-school graduates woefully unprepared for work or for life, whether or not they go to college. We leave behind more souls than we uplift.
Most young Americans never earn a college degree, and far too many of those who do are poorly served by sclerotic institutions that offer regularly overpriced degrees producing too little life transformation, too little knowledge transmission, and too little pragmatic, real-world value. Well-meaning and incredibly gifted members of faculties, administrations, and boards of trustees genuinely want to help students move up the ladder, but the current incentives don’t encourage the kind of programmatic innovation and pluralism that can help poor and middle-class Americans build a sufficiently durable foundation.
Decades into a digital revolution that will make lifelong work in any single sector rare, we need dynamism—not status quo–ism—in higher education. In our knowledge-intensive economy, we will need an ever-expanding, highly educated workforce. As important, we will need a broader base of wise, gritty learners. We cannot build what we need if we assume that the developmental experience of every 20-year-old will be the same.
We must build a university network that enhances social mobility, instead of reinforcing privilege. We need higher education to transform more lives by offering more accountability, more experimentation, more institutional diversity, more intellectual curiosity, more adaptive learning, and more degrees and certifications. We need a rethink, renewal, and expansion—tinkering around the edges won’t cut it.
Sadly, Washington is getting ready to subsidize failure. A mega-bailout in the form of student-debt forgiveness would prop up and excuse the broken parts of this system—missing the opportunity to go bigger and help college-age Americans from every class and community learn skills, enhance persistence, find work, and embrace the dynamic opportunities of the coming quarter century. Massive forgiveness of student debt would most help upper-class Americans who are going to be just fine without a bailout. It’s a regressive mistake.
Only about one in eight Americans carries student-loan debt; of the $1.6 trillion or so of debt that students have racked up, 56 percent is held by white-collar workers with advanced degrees. About one-third is owed by the wealthiest 20 percent of households, and nearly two-fifths was acquired in pursuit of graduate credentials. The fact is, the typical student-debt holder is more likely to be white, is more educated, and has more earning potential than the median American.
Washington’s debt conversation blurs the rather obvious distinction between doctors and dropouts. There are at least three kinds of debt: debt for specialized degrees that generally lead to high-paying careers, in fields such as law and medicine; debt for post-college education, such as a master’s degree in public policy; and debt for undergraduate courses, some of which lead to degrees and some of which lead to dropping out. Most doctors and lawyers are going to be able to pay off their loans just fine, and graduate students made the adult decision to assume debt. We need to think about the third group—and the system that encourages students to take on so much debt at such a young age with such an uncertain payoff. Rather than wiping the slate clean and repeating the same mistakes, Washington should take a hard look at reforming a broken system. The current debate is a missed opportunity.
A student-debt bailout rewards wealthy kids at the expense of middle-class families, but even more destructively, it perpetuates the lie that our current pedagogical arrangements are sufficient. We should instead admit our underperformance and find ways to introduce alternative approaches—overhauling everything from the credit-hour system to the accrediting cartels; developing new financial models that reconsider the standardization of prices and four-year degrees; experimenting with payment as a portion of future earnings rather than forcing students to take on debt on the first day of registration. To help those Americans who most need a hand, we need to tie public expenditures more tightly to student outcomes. Now is the time to build.
Far too often, higher education equates value with exclusivity, and not with outcomes. The paradigmatic schools that dominate higher-ed discussions in the pages of The New York Times, The Wall Street Journal, and The Washington Post measure themselves by how many high-school seniors they reject, rather than by how many they successfully launch, by how much they bolster the moral and intellectual development of the underprivileged, or even by a crude utilitarian calculus such as the average earnings of their recent graduates. Elite schools compete largely to attract greater numbers of applications and then to reject larger shares of those prospective students. Rejection rates north of 90 percent are seen as hallmarks of “excellence.” The “value” of an education in this decadent system is measured before a student registers for her first class, whether the course is meaningful or not.
Exclusion-based ranking treats education like a luxury good and sells four-year degrees like Louis Vuitton handbags. They’re valuable because they’re expensive and exclusive. Our most desirable universities build ivory towers on top of pedestals surrounded by fences marked keep out.
The famed Harvard Business School professor Clay Christensen argued before his death in 2020 that much of what is wrong with higher education lies in our political class’s fetishizing of the Ivy League, and the consequent status-chasing of so many “almost Ivies” in pursuing activities that help in rankings but do little for students or social mobility. Too many policy makers, thought leaders, and donors assume that most college experiences are like an Ivy League experience. The data tell a different story.
Thirty-one million people in this country are between the ages of 18 and 24. Thirteen million of them are current undergraduates; almost three-quarters of those are enrolled in four-year-degree programs. By contrast, 63,000 kids are enrolled in Ivy League undergraduate programs—that’s 0.2 percent of the 18-to-24-year-old population. Even if we add in all the undergraduates at the two dozen other Ivy-like institutions, we are still below 1 percent of the age cohort—yet this tiny subset of the population dominates the imagination of administrators, journalists, and lawmakers. Here’s the thing: Like the doctors and lawyers who pay off their debts, these kids are going to be just fine after graduation, tapping the networks of contacts they’ve acquired. Reform should be aimed at improving the experience of non-Ivy students, whether they’re enrolled in traditional four-year programs or not.
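As a quick check on the cohort arithmetic above, here is a minimal sketch using only the figures in the paragraph; the 310,000 figure it derives is simply the head count that would be needed to reach 1 percent of the age cohort, not an enrollment statistic from the article.
# A minimal sketch checking the cohort arithmetic quoted above.
cohort = 31_000_000       # Americans aged 18 to 24
ivy_undergrads = 63_000   # Ivy League undergraduate enrollment

share = ivy_undergrads / cohort
print(f"Ivy share of cohort: {share:.1%}")          # ~0.2%

# Head count needed before the group would reach even 1% of the cohort.
print(f"Undergrads needed for 1%: {0.01 * cohort:,.0f}")  # 310,000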
The biggest problem facing most young Americans isn’t student debt; it’s that our society has lost sight of the shared goal of offering them a meaningful, opportunity-filled future with or without college. We’ve lost the confidence that a nation this big and broad can offer different kinds of institutional arrangements, suited to different needs. What we say we want for Americans entering adulthood and what we actually offer them are disastrously mismatched. Debt forgiveness would not just be regressive; it would be recalcitrant. A massive bailout would increase the cost of education and stifle the kind of renaissance higher ed desperately needs.
Debt forgiveness would pour gasoline on the bonfire of education costs. According to the Education Data Initiative, “the average cost of college tuition and fees at public 4-year institutions has climbed 179.2% over the last 20 years for an average annual increase of 9.0%.” (For comparison, personal health-care costs—another disproportionately inflationary sector—have increased 58 percent over the same period.) The universities that take in federal dollars without useful tools to measure student outcomes have had too little motivation to resist price hikes. Meanwhile, students are taking out huge loans at artificially suppressed interest rates without considering whether their degree will justify the debt. Right now, there aren’t many guardrails against inflation on the supply or demand sides.
The debt conversation is dominated by demagoguery. Politicians who want an easy fix are keen on debt forgiveness. The talking points sound great. Everyone wants to seem compassionate and generous. The truth is more complicated. But at a minimum, if the system is so broken that Washington is debating a trillion-dollar-plus bailout, isn’t it worth reforming the system so this doesn’t immediately happen again? To ignore root causes is like cleaning up a polluted beach downstream, while leaving the factory upstream pumping ever more contaminants into the water. We need to be talking about institutional reform.
Conversation about education reform shouldn’t sound like grumpy old men grumbling about students choosing to be history majors. The liberal arts inarguably make this world a better place. More students should be intellectually curious about history, literature, and ethics. But technical training and acquiring credentials for the job market have a place as well. There’s no reason trade schools need to fight the liberal arts in a zero-sum game. We need to think through how we create more, and better, opportunities of both kinds.
The world is changing, and we need to promote life-long learning and institutions that can provide it. We need far, far more Americans to fall in love with education, theoretical and practical. That means we need more occasions to learn, more entry points. That’s not going to happen without more experimentation.
Programs offering bachelor’s degrees are stuck in a predictable mold: Most classes are between three and four credit hours; each semester’s load is between 12 and 18 credit hours; each semester’s length is 15 weeks; each year is two semesters; four years makes a degree. In an economy and culture as dynamic as ours, this much standardization makes little sense. Not every 18-year-old is going to college full-time for four years (actually 5.5 years at many “four-year schools,” but we’ll set that ugly fact aside for now). Few students are taking classes at 8 a.m. on Monday—and fewer still are taking Friday classes. Not everyone is going to do eight semesters in a row. Our ossified, one-size-fits-all approach isn’t working for the majority of current students—let alone for the potential students sitting on the sidelines.
Richard Arum and Josipa Roksa’s landmark 2011 study on college outcomes, Academically Adrift , tested 2,300 college students on what they learned in college. After freshman and sophomore years, 45 percent demonstrated essentially no learning improvements; after four years, 36 percent of students still demonstrated no improvements in key areas, including writing and critical thinking. Despite these embarrassing results, reform didn’t come.
Lifelong learning needs to help people move in and out of the classroom. We need people to be able to move in and out of school and the military, in and out of school and the Peace Corps, in and out of school and religious missions, in and out of school and manual labor.
We need dozens of new models that allow students to move from the world of real work into the classroom and back and forth, again and again. Some students should still immerse themselves in college, using a traditional eight-semester model. Some students will thrive if they work and learn at the same time. Some students will choose to travel and come back to school, or to learn on the road. Some students will opt for project-driven approaches that yield a marketable credential.
Most colleges today underinvest in student advising and mentoring, and in intensive internships and career development. Our standard testing practices encourage mindless cramming and dumping, rather than critical engagement. All students would benefit from more frequent, low-stakes, real-time, individualized assessments. A thriving system will cultivate a student’s self-awareness about different learning styles and help them develop a style that works for them.
Why can’t we have more travel options, more service options, more intensive internships, more work opportunities? A wise fifth-grade math teacher knows that more student curiosity is awakened by story problems and riddles than by opening class on day one with a mathematical theory. So, too, a pedagogically aware teacher of 19-year-olds realizes that a Socratically alive student usually begins with a genuine question, rather than with the professor’s declared truth. This happens more often via real-world struggle than via voice-of-God content bellowed from the “sage on the stage.” Not every course should have three to five weekly hours in class. Not every semester should have 15 weeks, nor every program eight semesters. Most simply: Not every major should have the same basic calendar building blocks that the accreditation bureaucracies inflexibly demand.
Our monolithic system lacks incentives to empower social entrepreneurs to spark intellectual curiosity. We would likely be better off if we conceived of higher education as three staggered 12-to-18-month periods of learning and work, rather than a single four- or five-year attempt.
The Higher Education Act hasn’t been reauthorized since 2008, just a year after the first iPhone came out and started a new era of mobile-information consumption. It’s time for an update that recognizes the modern realities of education and workforce demands. To do that we need reforms in two broad categories: institutions and money. First, how do we increase the kinds of institutional and programmatic opportunities that awaken students to lifelong learning? Second, how do we reform public investment and promote alternative funding models for this new, more diverse ecosystem? No single idea will cut it.
More is the key: more flexibility, more schools, more pricing models, more degrees, more openness to innovation. In private conversations, even current university presidents often desire more programmatic flexibility and innovation, but believe they can’t make many first moves alone. Here is a partial list of steps we can take together to empower them—and other as-yet-unknown innovators.
End the tyranny of four-year degrees.
Just one in four college-goers is a dependent, full-time student, working fewer than 16 paid hours a week. Different institutions serve different constituencies, so different schools should be competing for different students with different goals. That so many schools are designed on a single model while serving students who have very different needs and desires is a big part of why so many colleges are financially shipwrecked, and why the students who attend them too often end up the same.
Ditch the credit hour.
Education is measured in credit hours, a relic of the industrial economy of the early 20th century. Credit hours tell us little about what students have learned or how much they’ve grown, only how long butts have been in classroom seats. That might have been moderately adequate for the early and mid-1900s, but the model is not well suited to an age filled with the promise of individually tailored instruction. As a history professor, I saw more life change in 15-person seminars than in 200-person lecture classes, but it doesn’t necessarily follow that intimacy beats scale in every discipline. In much math pedagogy, neither a 15:1 nor a 200:1 student-teacher ratio is ideal; rather, an infinity-to-one online delivery system augmented by 1:1 and 3:1 breakout tutorials might propel more learners forward faster.
Rethink metrics for teaching and learning.
Technology allows individualized programs to guide students, focusing high-touch professor time on yet-to-be-mastered complex material. Students can move at a pace that isn’t available in a traditional, exclusively synchronous classroom. And teachers can gain greater flexibility and adaptability, paired with more rigorous and more transparent accountability.
Encourage corporate-led certification programs.
Programs led by the private sector that offer students easily transferable skills or guaranteed employment after graduating (for example, Walmart’s Live Better U and Google’s Career Certificates) are more economical and more secure for some students than a traditional diploma. Federal policy should reward providers that create high-quality alternative pathways for acquiring solid skills and secure jobs, even if they are not traditional institutions. Because we’re measuring knowledge rather than credits, we should think about new kinds of certification, too, including stackable micro-certifications that people can carry with them as they move between jobs and locations.
Each of these changes will depend on breaking up the accreditation cartels. College presidents tell me that the accrediting system, which theoretically aims to ensure quality and to prevent scammers from tapping into federal education dollars, actually stifles programmatic innovation inside extant colleges and universities aiming to serve struggling and underprepared students in new ways. In health care, it has helped create a critical shortage of trained clinical staff at all levels. Higher-education leaders want greater flexibility to experiment and grow. Much of the monotony in higher education is a result of the accreditation process. Accreditation should protect students from snake-oil salesmen, but unfortunately it has become its own racket. Existing schools try to lock out potential competitors. Timidity, ideological homogeneity, and red tape are all structurally encouraged by the accreditation processes. We must demand radical reform—or even the full breakup of the system. Regional agencies, private associations, and approaches unburdened by red tape can measure quality and protect student interests—so long as radical transparency is required.
Just as there’s no single model for institutional reform, there’s no single funding solution. We need more ways to promote net-price transparency, more ways to target funding, and more ways to tie funding to outcomes. Here are a few places to start.
Target funding to better help students.
Start with means-testing grants and loans. The federal government’s large-scale intervention in higher-ed funding has gone hand in hand with reckless loan practices. Loans given to students to attend schools that offer little to no return on investment, to poor families (through parent PLUS loans) who have limited ability to repay, or to graduate students (through grad PLUS loans) who pursue expensive and unremunerative graduate or professional degrees are a scandal. The system tells high-school students with absolute certainty that a college degree is their golden ticket, it pushes them to take on massive debt, and then it turns a cold shoulder when they drop out or graduate with undervalued degrees. For kids who weren’t prepared for college, it’s downright predatory. Grants and loans need to be tied to realistic assessments of a student’s projected ability to pay them back. The money ought to be limited to true educational expenses—all public money should fund learning, not subsidize high-end living accommodations off campus. The federal government’s careless loan practices sound compassionate, but they impoverish many people who would have been better off without so much debt.
Align government policies to encourage experimentation.
Washington isn’t fast enough or flexible enough to solve this many problems on its own. States have a giant role to play. We could increase federal aid to states that meet outcomes-based criteria. Affordable pricing and measurable student success should generate increased federal funding. We’ve seen state programs like Georgia’s Helping Outstanding Pupils Educationally and Zell Miller Scholarships tie state investment to improving academic performance. Similarly, the Texas State Technical College system has worked to align funding with earnings outcomes. The Cicero Institute looked at technical colleges in Texas, which receive additional funding for each student who holds a good job in the first five years after graduation. After the changes went into effect, the starting earnings for new graduates increased by 61 percent.
Make higher-ed institutions put more skin in the game.
It’s worth considering better mechanisms for future income-sharing arrangements between students and universities. Right now, schools don’t gain much when students succeed, and they remain too insulated when debt-loaded students fail. At most universities, your personal success matters to altruistic professors and mentors, but it doesn’t matter much to the billing department or the bottom line—schools just need the tuition money to flow. Students and their colleges should have a shared, long-term interest in students’ success.
Differentiate prices by field of study.
Presently, different majors at the same school are priced the same, even though some place embarrassingly few demands on students. Different majors generate widely divergent labor-market outcomes, and so provide varied returns on students’ investment of money and time. Students should have access to more of this information at the front end. Like the rest of the proposals here, this one carries unintended consequences to be avoided, but it’s a debate worth having. Different products and services have different cost structures, and some loans are riskier than others. We should reflect that basic reality by making prices transparent and segmenting different fields of study. Today’s lack of price and outcome transparency encourages students to take on large loan burdens in pursuit of unremunerative degrees. (One study found that 28 percent of bachelor’s degree programs do not have even a mildly positive net return on investment.) Ditching obsolete pricing models doesn’t mean we have to let students sink or swim on their own.
These are a few starter models. We need many more.
Over the past several decades, our sights have narrowed and our system has atrophied. But now we find ourselves in the middle of an exciting and transformative era. America needs a resilient, high-octane workforce of lifelong learners. We need to build and rebuild the schools and programs to help them succeed. This is what America has always been about—looking ahead, founding institutions, and solving problems.
Amnesties and bailouts are not solutions. We need to think bigger, and to reconsider the different forms that American success might take. We need an economy that rewards lots of skills, not just those prized by a narrow upper tier.
There are far too few innovators, too few institutions, too few models, and too few programs to meet the full range of needs. More schools—including those yet to be created—should compete to change the lives of their students, rather than compete to reject more applicants. We need more degrees, more liberal-arts programs, and more technical certifications. We need more nontraditional students and more nontraditional school years. And we need to reinvigorate the imagination and the energy to design and pave new pathways to success for every American with the appetite to go after it.
Debt forgiveness misses the moment by rewarding and reinforcing a broken system. Future generations of Americans deserve access to a higher-education sector that works. We owe it to our kids and grandkids to build something far better. Success—theirs and ours—depends on it.
" |
250 | 2,023 | "A Chatbot Beat the SAT. What Now? - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/03/open-ai-gpt4-standardized-tests-sat-ap-exams/673458" | "Is This the Singularity for Standardized Tests? GPT-4’s mastery of the SAT will re-entrench the power and influence of rote exams.
Last fall, when generative AI abruptly started turning out competent high-school- and college-level writing, some educators saw it as an opportunity. Perhaps it was time, at last, to dispose of the five-paragraph essay, among other bad teaching practices that have lingered for generations. Universities and colleges convened emergency town halls before winter terms began to discuss how large language models might reshape their work, for better and worse.
But just as quickly, most of those efforts evaporated into the reality of normal life. Educators and administrators have so many problems to address even before AI enters the picture; the prospect of utterly redesigning writing education and assessment felt impossible. Worthwhile, but maybe later. Then, with last week’s arrival of GPT-4, came another provocation. OpenAI, the company that created the new software, put out a paper touting its capacities. Among them: taking tests. AIs are no longer just producing passable five-paragraph essays. Now they’re excelling at the SAT, “earning” a score of 1410. They’re getting passing grades on more than a dozen different AP exams. They’re doing well enough on bar exams to be licensed as lawyers.
It would be nice if this news inspired educators, governments, certification agencies, and other groups to rethink what these tests really mean—or even to reinvent them altogether. Alas, as was the case for rote-essay writing, whatever appetite for change the shock inspires might prove to be short-lived. GPT-4’s achievements help reveal the underlying problem: Americans love standardized tests as much as we hate them—and we’re unlikely to let them go even if doing so would be in our best interest.
Many of the initial responses to GPT-4’s exam prowess were predictably immoderate: AI can keep up with human lawyers, or apply to Stanford, or make “education” useless.
But why should it be startling in the slightest that software trained on the entire text of the internet performs well on standardized exams? AI can instantly run what amounts to an open-book test on any subject through statistical analysis and regression. Indeed, that anyone is surprised at all by this success suggests that people tend to get confused about what it means when computers prove effective at human activities.
Back in the late 1990s, nobody thought a computer could ever beat a human at Go, the ancient Chinese game played with black and white stones. Chess had been mastered by supercomputers, but Go remained—at least in the hearts of its players—immune to computation. They were wrong. Two decades later, DeepMind’s AlphaGo was regularly beating Go masters. To accomplish this task, AlphaGo initially mimicked human players’ moves before running innumerable games against itself to find new strategies. The victory was construed by some as evidence that computers could overtake people at complex tasks previously thought to be uniquely human.
By rights, GPT-4’s skill at the SAT should be taken as the opposite. Standardized tests feel inhuman from the start: You, a distinct individual, are forced to perform in a manner that can be judged by a machine, and then compared with that of many other individuals. Yet last week’s announcement—of the 1410 score, the AP exams, and so on—gave rise to an unease similar to that produced by AlphaGo.
Perhaps we’re anxious not that computers will strip us of humanity, but that machines will reveal the vanity of our human concerns. The experience of reasoning about your next set of moves in Go, as a human player doing so from the vantage point of human culture, cannot be replaced or reproduced by a Go-playing machine—unless the only point of Go were to prove that Go can be mastered, rather than played. Such cultural values do exist: The designation of chess grand masters and Go 9-dan professionals suggests expertise in excess of mere performance in a folk game. The best players of chess and Go are sometimes seen as smart in a general sense, because they are good at a game that takes smarts of a certain sort. The same is true for AIs that play (and win) these games.
Standardized tests occupy a similar cultural role. They were conceived to assess and communicate general performance on a subject such as math or reading. Whether and how they ever managed to do that is up for debate, but the accuracy and fairness of the exams became less important than their social function. To score a 1410 on the SAT says something about your capacities and prospects—maybe you can get into Stanford. To pursue and then emerge victorious against a battery of AP tests suggests general ability warranting accelerated progress in college. (That the victory doesn’t necessarily provide that acceleration only emphasizes the seduction of its symbolism.) The bar exam measures—one hopes—someone’s subject-matter proficiency, but doesn’t promise to ensure lawyerly effectiveness or even competence. To perform well on a standardized test indicates potential to perform well at some real future activity, but it has also come to have some value in itself, as a marker of success at taking tests.
That value was already being questioned, machine intelligence aside. Standardized tests have long been scrutinized for contributing to discrimination against minority and low-income students. The coronavirus pandemic, and its disruptions to educational opportunity, intensified those concerns. Many colleges and universities made the SAT and ACT optional for admissions. Graduate schools are giving up on the GRE, and aspiring law students may no longer have to take the LSAT in a couple of years.
GPT-4’s purported prowess at these tests shows how little progress has been made at decoupling appearance from reality in the tests’ pursuit. Standardized tests might fairly assess human capacity, or they might do so unfairly, but either way, they hold an outsize role in Americans’ conception of themselves and their communities. We’re nervous that tests might turn us into computers, but also that computers might reveal the conceit of valuing tests so much in the first place.
AI-based chess and Go computers didn’t make play by people obsolete, but they did change human-training practices.
Large language models may do the same for taking the SAT and other standardized exams, and evolve into a fancy form of test prep. In that case, they could end up helping those who would already have done well enough to score even higher. Or perhaps they will become the basis for a low-cost alternative that puts such training in the hands of everyone—a reversal of examination inequity, and a democratization of vanity. No matter the case, the standardized tests will persist, only now the chatbots have to take them too.
" |
251 | 2,021 | "Happy Is an Elephant. Is She Also a Person? - The Atlantic" | "https://www.theatlantic.com/ideas/archive/2021/11/happy-elephant-bronx-zoo-nhrp-lawsuit/620672" | "The Elephant Who Could Be a Person The most important animal-rights case of the 21st century revolves around an unlikely subject.
Happy at the Bronx Zoo
The subject of the most important animal-rights case of the 21st century was born in Thailand during the Vietnam War. Very soon after that, a tousle-haired baby, she became trapped in human history. She was captured, locked in a cage, trucked to the coast, and loaded onto a roaring 747 that soared across the Pacific until it made landfall in the United States. She spent her earliest years in Florida, not far from Disney World, before she was shipped to Texas. In 1977, when she was 5 or 6, more men hauled her onto another truck and shipped her to New York, to a spot about four miles north of Yankee Stadium: the Bronx Zoo. In the wild, barely weaned, she’d have been living with her family—her sisters, her cousins, her aunts, and her mother—touching and nuzzling and rubbing and smelling and calling to each other almost constantly. Instead, after she landed at the zoo and for years after, she gave rides to the schoolchildren of New York and performed tricks, sometimes wearing a blue-and-black polka-dotted dress. Today, in her 50s and retired, she lives alone in a one-acre enclosure in a bleak, bamboo-shrouded Bronx Zoo exhibit called, without irony, “Wild Asia.” This fall, on a day nearly barren of tourists, I rode through Wild Asia on a mostly empty monorail, the Bengali Express, over the Bronx River. “You’ll have no trouble spotting the next animal on our tour, the largest land mammal,” the tour guide said, dutifully reciting a script. “The lovely lady we’re meeting right here, her name is Miss Happy.” A few yards away, behind a fence of steel posts and cables enclosing a small pond, a stretch of grass, and a patch of compacted dirt—an exhibit originally named the “Khao Yai,” after Thailand’s first national park—Miss Happy stood nearly still and stared, slightly swaying, as she lifted and lowered one foot. Miss Happy has managed “to keep her wonderful figure in shape,” the guide said, as if she were describing a vain, middle-aged woman, and the zoo takes “very, very good care” of her: She receives “weekly pedicures and baths,” she said, as if this were an indulgence, the zoo a spa. The script did not mention that the pedicures are necessary to help prevent crippling and even fatal foot disease, a common consequence of captivity, since, in the wild, these animals, traveling in families, often walk many miles a day.
I rode the monorail again. Happy stood and swayed and stared and lifted and lowered her foot. Next year, maybe as soon as January, the New York Court of Appeals will hear oral arguments regarding a petition of habeas corpus that alleges that Happy’s detention is unlawful because, under U.S. law, she is a person. She is also an elephant.
A “person” is something of a legal fiction. Under U.S. law, a corporation can be a person. So can a ship. “So it should be as respects valleys, alpine meadows, rivers, lakes, estuaries, beaches, ridges, groves of trees, swampland, or even air,” Justice William O. Douglas wrote in a dissenting Supreme Court opinion in 1972. Pro-life activists have argued that embryos and fetuses are persons. In 2019, the Yurok tribe in Northern California decreed that the Klamath River is a person. Some forms of artificial intelligence might one day become persons.
But can an elephant be a person? No case like this has ever reached so high a court, anywhere in the English-speaking world. The elephant suit might be an edge case, but it is by no means a frivolous case. In an age of mass extinction and climate catastrophe, the questions it raises, about the relationship between humans, animals, and the natural world, concern the future of life on Earth, questions that much existing law is catastrophically ill-equipped to address.
The U.S. Constitution, written in Philadelphia in 1787, rests on a chain-of-being conception of personhood. The men who wrote the Constitution not only made no provision for animals or lakes or any part of the natural world but also made no provision for women or children. The only provision they made for Indigenous people and for Africans and their descendants held in bondage was mathematical: They calculated representation in Congress by adding up all the “free Persons,” subtracting “Indians not taxed,” and counting enslaved humans as “three fifths of all other Persons.” When the question was raised in Congress earlier, about whether, in that case, domesticated animals like cattle ought to count toward representation, Benjamin Franklin had offered a rule of thumb for how to tell the difference between people and animals: “Sheep will never make any insurrections.” He did not mention elephants.
Much of American history is the story of people, rights, and obligations left out of the constitutional order making their way into it, especially by constitutional amendment. The purpose of amendment, as early Americans understood it, was “to rectify the errors that will creep in through lapse of time, or alteration of situation.” Without amendment, they believed, there would be no way to effect fundamental change except by revolution: everlasting insurrection. But, like the peaceful transfer of power, the people’s ability to revise the Constitution is no longer to be relied on: Meaningful amendment became all but impossible in the 1970s, just when the environmental and animal-rights movements began to gain strength.
The Constitution has become all but unchangeable; the natural world keeps changing. The average annual temperature in Philadelphia in 1787 was 52 degrees Fahrenheit. In 2020, it was 58. Last year, the World Wildlife Fund reported that wildlife populations around the globe have declined sharply in the past half century, with the species it monitors falling by an average of two-thirds. “We are wrecking our world,” a head of the fund said. Most of the latest extinctions are due not to climate change but to habitat loss. Meanwhile, in the violence of human conquest of animal territory and the atrocities of factory farming, diseases cross from animals to humans and back again. Nearly 5 million people have so far died of COVID-19, which will not be the last zoonotic pandemic. Humans, having destroyed the habitat of many of the world’s other species, are now destroying their own.
New federal and international laws could help, but Congress barely functions and most environmental treaties are either nonbinding or not enforced and, in any event, the United States is not party to many of them, having largely withdrawn from the world. With so many legal, political, and constitutional avenues closed, the most promising strategy, influenced by Indigenous law, has been to establish the “rights of nature.” One such approach relies on property law. Karen Bradshaw, a law professor at Arizona State University, argues that wildlife such as bison and elephants have ancestral lands, and that they use, mark, and protect their territory. “Deer do not hire lawyers,” she writes in a new book, Wildlife as Property Owners, but if deer did hire lawyers, they’d be able to claim that, under the logic of the law of property, they should own their habitats. Another approach, the one taken on behalf of Happy by the Nonhuman Rights Project (NhRP), a sort of animal ACLU, relies on common law. It takes inspiration from abolitionists who used habeas corpus petitions to establish the personhood, and gain the freedom, of people held in bondage. Both strategies risk pitting animal-rights activists against environmentalists, two movements that have often found themselves at odds. (Environmentalists, for instance, wanted wolves in national parks, but accepted that wolves outside the park could be shot by hunters and ranchers.) This case isn’t about an elephant. It’s about the elephant in the courtroom: the place of the natural world in laws and constitutions written for humankind. In the wild, the elephant is a keystone species; if it falls, its entire ecosystem can collapse. In the courts, elephant personhood is a keystone argument, the argument on which all other animal-rights and even environmental arguments could conceivably depend. Elephants, the largest land mammal, are among the most intelligent, long-lived, and sentient of nonhuman animals, and, arguably, they’re the most sympathetic. As moral agents, elephants are better than humans. They’re not quite as clever, but, as a matter of social intelligence, they’re more clever than every other animal except apes and, possibly, bottlenose dolphins, and they’re more decent than humans. They live in families; they protect their young; they grieve their dead; they don’t eat other animals, and they don’t cage, isolate, and torture them. Elephants appear to possess a theory of mind: They seem to understand themselves as individuals, with thoughts that differ from the thoughts of other creatures. They suffer, and they understand suffering.
The Bronx Zoo insists that Happy is not alone: There is one other elephant at the zoo, Patty, and although they’re kept apart, they can sometimes see and smell each other, and even touch one another’s trunks. (Patty and Happy take turns being on exhibit, and also use a small yard, off-exhibit, and each has a stall in an elephant barn.) The zoo has dismissed the case as nothing more than a cynical public-relations scheme. And certainly Happy’s plight has attracted a slew of celebrities. “Everyone knows that elephants are social animals,” Mia Farrow tweeted while the NhRP pursued a #FreeHappy campaign. “No matter how much money you make by displaying her, it’s wrong. Let Happy join other elephant friends at a sanctuary.” Nearly 1.5 million people have signed a petition calling for Happy’s release.
“Happy is not Happy,” read a sign carried by a little girl dressed in a gray-fleece elephant suit, during a 2019 protest held at the zoo. “We would have taken her case if she’d had a different name,” Steven Wise, head of the NhRP, told me. But the name helps. In the 1960s, the ACLU, in choosing to challenge Virginia’s miscegenation laws, selected as a test case the interracial marriage of Richard and Mildred Loving. The ACLU wanted Loving v. Virginia to be about love; the NhRP wants Happy’s case to be about happiness. It also doesn’t hurt their case that her misery comprises forms of distress that many humans, just now, understand better than they used to. In this 21st-century Planet of the Apes moment, humans have so ravaged the planet that many feel themselves caged, captive, isolated, and alone, dreading each dawn, so many humans wearing elephant suits, seeing in Happy a reflection of their own despair.
That’s not the only mirror in this story. Happy’s lawyers at the NhRP found Happy to be an attractive client for many reasons, but among them is that, in 2005, in an extraordinary experiment conducted by the cognitive ethologist Joshua Plotnik, she became the first elephant proven to recognize herself—as a self—in a mirror. This test, which only great apes, dolphins, and elephants have passed, is a measure of a species’ self-awareness, which is often linked to a capacity for empathy. But Plotnik, who runs a lab at CUNY’s Hunter College and leads a nonprofit called Think Elephants International, has reservations about the NhRP’s case and regrets the way its litigation has deployed his work. The Bronx Zoo is run by the Wildlife Conservation Society, whose mission is to conserve habitat in 14 of the world’s largest wild places, home to more than 50 percent of the planet’s diversity, and is a leader in efforts to reduce human-elephant conflict in Asia and restore elephant populations and fight poaching in Africa. (In 2016, its campaign “96 Elephants”—for the 96 elephants then killed in Africa every day—helped lead to a near-total ban on the sale of ivory in the U.S.) “Why WCS?” Plotnik asks about the NhRP’s choice of adversary. “Why target them? Why not a roadside zoo that we all agree is taking terrible care of an elephant?” Arguably, every dollar the WCS spends fighting a case involving this single captive elephant is a dollar it doesn’t spend on the preservation of habitat for millions of elephants in the wild, including the mere thousands remaining in Thailand.
“I think the Wildlife Conservation Society is great,” Wise told me. “But all we care about is our client.” Amicus briefs have been filed on Happy’s behalf by a legion of the country’s most respected lawyers, philosophers, and animal behaviorists, including Laurence Tribe, Martha Nussbaum, and the much-celebrated scientist Joyce Poole, who has studied elephants for nearly as long as Happy has been alive, and who co-directs ElephantVoices, a nonprofit research center that studies elephant communication, cognition, and social behavior. Briefs in support of the WCS, on the other hand, as Tribe pointed out to me in an email, have been filed instead by “groups with a strong economic self-interest,” such as the National Association for Biomedical Research, which claims that establishing personhood for Happy risks the future of all laboratory testing on all animals. And, as Poole observed in one of her own affidavits, none of the many highly regarded WCS scientists who study elephants in Asia and Africa has contributed an affidavit in support of the zoo’s position that Happy should remain in the Bronx.
No historians have been involved in the case. But elephants, which can live into their 70s, appear to possess not only a theory of mind but also a theory of history: They seem to understand their lives as a series of events that take place over time; they remember the past and know that it’s different from the present; they might well wonder and worry about the future. Most other nonhuman animals live in the present—so far as humans know, anyway—but elephants are, like humans, historians.
Elephants cannot write autobiographies, of course. But for a long time, the people who subscribed to a chain-of-being ranking of all creatures believed that the same applied to whole classes of humans. In 1845, after Frederick Douglass wrote his autobiography, the abolitionist Wendell Phillips wrote him, “I am glad the time has come when the ‘lions write history.’” Douglass was not a lion. “We are two distinct persons, equal persons,” he once wrote to the man who once claimed to own him, as if he were an animal. “You are a man, and so am I.” An elephant is not a man, and an elephant cannot write history. But an elephant might very well be a person, and every elephant has a history. The NhRP says that no elephant should live alone; the Bronx Zoo says this particular elephant should, because of her past: “Happy has a history of not interacting well with other elephants,” the zoo’s director, James Breheny, said in his affidavit.
What if another way to consider this case, then, is biographical? It wouldn’t answer the question of what Happy wants, but it would contain within it a tale of atrocity and slaughter, care and tenderness, loss upon loss: the unraveling and un-constituting of worlds.
Around 1970, Harry Shuster, a South African lawyer and businessman, placed an order for seven baby elephants. Shuster had earlier opened an animal park called Lion Country Safari in Florida, not far from Disney World, and was now preparing to open another one in Southern California, a $12 million “un-zoo,” a drive-through safari just off the San Diego and Santa Ana freeways, outside Irvine. He said he expected the 500-acre site to be “the next Disneyland.” In California, he got some of his elephants from Hollywood, including an Asian elephant named Mocdoc. She’d spent much of her life performing for Ringling Bros. circus until, in 1966, she became a star of the safari-themed television series Daktari.
After the show was canceled, Shuster bought her. But what Shuster really wanted were elephant babies, as adorable as Disney’s Dumbo. By one account, he paid $800, in advance, for seven calves. He planned to name them after the seven dwarfs from Walt Disney’s 1937 film Snow White and the Seven Dwarfs: Grumpy, Sleepy, Doc, Sneezy, Dopey, Bashful, and Happy.
Mirror, mirror, on the wall …
To capture and transport them, Shuster very likely hired an outfit called the International Animal Exchange, although this is impossible to confirm. Details of Happy’s life are hard to come by, and harder to corroborate. Lion Country Safari was not able to locate its records from the 1970s, the International Animal Exchange declined to speak with me, and the Bronx Zoo did not respond to my request to see its files on Happy. I did, however, speak to some of Happy’s former keepers, and this account relies, too, on a wealth of documentary evidence.
The International Animal Exchange was run by a man from Michigan named Don Hunt, who’d started out with a pet store in Detroit and then starred in a nationally syndicated children’s television show called B’wana Don in Jungle-La, with his trained chimpanzee, Bongo Bailey. In 1960, Hunt used the money he made from the TV show to start the International Animal Exchange. In 1968, it provided nearly all of the hundreds of animals purchased by Busch Gardens, in Tampa. And by 1969, according to Hunt, it had grown to become the largest importer of wild animals in the world. (The company remains in family hands and chiefly provides animal transport.) Hunt’s brothers ran the business from Detroit, but Hunt lived in Kenya in a house “adorned with elephant tusks and leopard skins,” according to a 1969 Newsweek article, and kept a pet cheetah, as if he lived on the set of Daktari.
His biggest money came from supplying monkeys to laboratories, but he also did a brisk business, he told Newsweek, in “baby elephants.” He only ever caught animals to order, Newsweek reported, “never on speculation.” In Africa, Hunt did much of the capturing himself. “Giraffes have to be lassoed,” he told Newsweek. “It has to be done quickly.” In other parts of the world, the International Animal Exchange contracted with private dealers who hired local hunters. Prices were high, Hunt told The Wall Street Journal in 1971, when the International Animal Exchange was supplying four out of every five animals imported by U.S. zoos: “Zebra, $2,000 to $2,500; giraffe, $5,000 to $6,500; small antelope, $1,000 to $4,000; rhinoceros, $7,000 to $10,000; leopard, $1,000 to $1,500; and lowland gorilla, $5,000 to $6,000.” He concentrated, he said, on the babies and young adults. Between 1969 and 1970, Hunt’s company’s gross revenue doubled.
In the wild, elephants live in matriarchal herds where all the females help raise the young. Calves spend the first few years of their lives nursing, and are virtually inseparable from their mothers. Males usually venture out on their own in their teens. But female elephants seldom leave their mothers. The seven calves that came to be known as the seven dwarfs were very young; Happy seems to have been less than a year old when she was captured. Methods of capture vary, but capturing a calf can involve killing its mother and other adults that die trying to protect it, as reported in a recent study by TRAFFIC, a conservation organization that works on the wild-animal trade. An expedition that captured many very young calves—like the one in which Happy might have been caught—might have involved the slaughter of most of a herd.
After the terror and tragedy of capture, and having been separated, forever, from their mothers, sisters, aunts, and cousins, calves were typically herded into a corral. “Catching an animal is the easy part,” Hunt said. “It’s after capture that the work starts.” They’d have been kept in the corral for a month—an adaptation period, Hunt explained. Young elephant calves would have had to learn to take milk from a bottle; older calves would have had to get used to eating not local plants but oats, corn, and soybeans. Hunt said that the “animals must become accustomed to man, too, and they must adjust to the shock of losing their freedom.” To prepare animals for travel, he set up speakers and played tapes of the noise of traffic, airplanes, and ships, over and over. The seven calves were herded into cages and flown to the U.S. Lion Country’s youngest elephants were so young that they had to be fed formula, by bottle, every four hours. According to one report, Sleepy died soon after arriving in California, and was replaced, although other sources suggest that all seven calves went instead to Florida.
“Two baby elephants came by truck on opening day,” the Los Angeles Times reported in June 1970. It’s possible that those two were Happy and Grumpy, another female, who seemed, from the start, inseparable. “They were buddies,” a former keeper told me. By opening day, the California Lion Country Safari boasted more than 800 animals, including ostriches, chimpanzees, wildebeests, gazelles, elands, impalas, giraffes, flamingos, camels, and lions. But Shuster was scrambling to add more. “Lion Country Safari, a great hotel for wild animals, is not yet filled,” the Times reported. Seven rhinos arrived only the night before the opening, even as “at least 13 cheetahs, 55 lions, 6 hippos and 80 more antelopes are still on the high seas.” Fifty zebras were “in quarantine in New Jersey.” Most of all, the public wanted “zoo babies,” which were said to “fall in love at first sight with human stepparents.” Lion Country represented the vanguard of a new era of zoo. (It also appears to have been the inspiration for Michael Crichton’s Jurassic Park.)
“You stay in your little steel cage (your car), windows up, and gawk as they gambol, cuddle in the shade, grumble over a hunk of horse bones, or take a swipe at your windshield wiper,” the Times reported. “Over on a hillock, elephants—ears flapping in the wind—play tag—you’re it—with nary a thought of the caged homo sapiens cruising by.” Terry Wolf is now retired, but he started working at Lion Country Safari in Florida in 1970 and went on to become its director of wildlife. “We had good intentions,” Wolf told me. “And it was a different time.” Flipper was on television (think of Lassie, but with a dolphin), he reminded me, and so was Grizzly Adams (Lassie, but with a grizzly bear). Ads pitched Lion Country as the perfect family outing: It would feel like traveling to Africa or Asia, without all the hassle of passports and malaria shots. One advertisement asked, “Why not round up your pride and bring them to Lion Country today?” A billboard at the entrance read: NO TRESPASSING: VIOLATORS WILL BE EATEN! When Happy left Thailand, she flew into American history. In 1971, while war raged in Vietnam, Henry Kissinger, a divorcé, took his two children, ages 10 and 12, on a trip to California. That year, Kissinger, Richard Nixon’s national security adviser, went on a top-secret mission to China. In California, an awkward Shuster, a very distracted-looking Kissinger, and the children posed for a photograph at Lion Country with “the prize baby elephant of the animal preserve.” (Shuster never had much to do with the animals, Wolf told me. “He never had so much as a goldfish for a pet.”) Shuster, in a suit and flashy tie, appears to be attempting to hold the very young Asian elephant in place while the children pet its back. It might even have been Happy, an elephant orphan, made into a plaything for the children of statesmen.
“Why should they want to escape?” Shuster once asked, about the animals in his drive-through safaris. In the wild, he said, they never had it so good. Only one elephant ever escaped from Lion Country Safari, California. An Asian elephant named Misty crashed her way out and charged toward the 405 freeway. When her keeper tried to chain her legs, she killed him. She stepped on his head and crushed his skull.
“Want more than fries from your drive-through?” asked a television ad for Lion Country Safari, Florida. “Drive Yourself Wild!” In 1972, Lion Country put the dwarfs on exhibit in Florida, where they were tended to by a series of beautiful young women, their own Snow Whites. Was Happy happy? One keeper said the elephants were misnamed. “Grumpy should be Sleepy,” 28-year-old Linda Brockhoeft told the Fort Lauderdale News. “Sneezy is the grumpy one.” Doc was mischievous and Dopey wasn’t necessarily daft. Brockhoeft loved Bashful best.
In Florida, the seven dwarfs lived in a petting zoo called Pets Corner, in a cement-floored, U-shaped pen, with a little lake and fountain. “Visitors could walk into the center of the U, where the elephants could walk right up to them and people could pet the elephants and touch them and the elephants would wind their trunks around them,” Carol Strong-Murphy told me. She worked at Lion Country from 1972 to 1974. It was her job to take care of the baby elephants, 10 hours a day, six days a week, feeding them and minding them and sweeping out a little barn, where each calf had a stall with its name on it, like the beds of the seven dwarfs in Snow White.
She was devoted to them, and found them ingenious. “Those elephants could do anything: untie your shoes, get into your pocket, take your keys, open any door,” she said. They could escape pretty easily, but she figured out that all they really wanted to do was get back in, to be with the rest of the herd. “If I just got the other six to the opposite side, the one would climb back in.” It took her only two days, she said, to tell them all apart. “Happy,” she said, “was just a really nice elephant.” For a few cash-rich years, the business grew, and Shuster acquired more animals, and more elephants. In 1972, he opened a new Lion Country in Grand Prairie, Texas. But when the price of gas began to rise, Shuster started selling off his most valuable animals to raise cash. In 1974, the seven dwarfs, now likely around four years old, were separated. Shuster sold Sneezy to the Memphis Zoo, which loaned him to the Tulsa Zoo in 1977. (Sneezy is still there, but the Tulsa Zoo did not reply to my queries about his history.) Dopey and Bashful were sold from one circus to another before ending up in the George Carden Circus in 1993, under new names: Cindy and Jaz. As of this spring, Cindy was still traveling with the circus and performing. (The circus did not reply to my queries.) Doc died in a zoo in Canada in 2008. I don’t know what happened to Sleepy.
Happy and Grumpy were inseparable. “They were sweet little girls,” Terry Wolf told me. “Happy was the shy, reserved one. Grumpy was more playful, the one to steal all the treats out of your pocket.” They were three feet tall when he met them in Florida. When the other dwarfs were sold to other zoos and circuses, Happy and Grumpy were shipped, by truck, to the Lion Country Safari in Texas, another few thousand miles, to yet another Sun Belt state. At the Grand Prairie site, children could ride on boats shaped like hippopotamuses through the baby elephants’ little lake. In May of 1974, a photographer captured Grumpy (“a recent arrival”), deep in the water, reaching her trunk up to a little girl in pigtails who was riding on one of the boats. That summer, Grumpy was caught on camera reaching for a hot dog held by a 12-year-old girl.
Two years later, in September of 1976, as the gas crisis worsened, Shuster closed the Lion Country Safari in Texas. All of its animals—but not the site—were sold for $271,000 to the International Animal Exchange, which hoped to secure a lease on the land and reopen the park as the Lion Country Safari and Wild Animal Breeding Farm: They intended to use it to breed animals to stock other parks. But by January of 1977, the fate of the animals remained uncertain, and in March, the International Animal Exchange sold Happy and Grumpy to the Bronx Zoo. That same year, two young elephants, both four years old, not much younger than Happy and Grumpy, escaped from the Grand Prairie site: While they were waiting to be shipped to Japan, they had been kept inside a locked truck and, somehow, they got out. Likely, Happy and Grumpy had been locked in a truck too.
The Nonhuman Rights Project argues that no elephant should live in solitary confinement in a one-acre enclosure. Wise is sure he knows how Happy feels and what she wants. “She is a miserable, depressed, extraordinarily lonely elephant,” he told me, and said any elephant would be, under those conditions. He blames people at the Bronx Zoo for her misery. “They certainly don’t love Happy. Why didn’t they just put her in a spectacular sanctuary?” (The NhRP had arranged that: Last year, the Elephant Sanctuary in Tennessee signed an affidavit in support of the NhRP, pledging to offer a place for Happy. But shortly afterward the sanctuary asked the NhRP not to file the affidavit and issued a statement distancing itself from the case and describing the Bronx Zoo as a “well-respected and fellow-accredited member of the Association of Zoos and Aquariums.”) Happy isn’t any elephant, Breheny, the zoo director, says. She is a particular elephant, who doesn’t get along with other elephants, and who is extremely anxious about any kind of travel and “becomes particularly distressed even by short moves within the zoo.” The NhRP’s argument that Happy would be better off in a sanctuary, Breheny says, neglects “the particular needs, wants, and temperament of any one elephant” and rests on research like the work done by Joyce Poole on elephants in the wild. Poole says her research is relevant to Happy’s condition and argues that if the zoo maintains that Happy has had a hard time getting along with other elephants and is miserable being moved, these claims are not evidence that Happy should stay where she is but are, instead, confirmation of “the zoo’s inability to meet Happy’s basic needs.” Happy is a particular elephant, but she also stands for all elephants: A test case uses an individual to make a rule about a category. And if the courts recognize her as a person, that ruling will help establish the rights of nature itself. This particular test case is also caught up in the difference between personhood, as a legal concept, and personality, as a concept in the study of animal behavior. “Personality” means consistent individual differences in the way animals behave, measurable traits, like boldness, innovation, sociality, and fear of novelty, as Joshua Plotnik explained to me. There’s no such thing as a typical elephant personality, any more than there’s such a thing as a typical human personality. Happy has a particular personality, and elephants with different personalities respond differently to different situations. Would she, at this point, thrive in a sanctuary? “The sanctuaries, lawyers, and scientists who have never met her really need to consider Happy as an individual, with unique experiences and needs,” Plotnik told me. Plotnik, who worked at the Central Park Zoo when he was in high school, knows and admires zookeepers—people animal-rights activists tend to demonize—and he trusts that Happy’s keepers want what’s best for her, even if, as he pointed out, they shouldn’t be the only people involved in figuring that out.
Another way to close the distance between “any elephant” and “this particular elephant,” though, would be to establish the history of the category to which Happy actually belongs, not elephants in the wild but elephants in the United States. Happy carries on her wide, gray back this terrible history, a fable about the brutality of modernity. It begins with the very first elephant in America. She was called, simply, “the Elephant.” The Elephant, a two-year-old female from Bengal, was shipped from Calcutta on the America in 1795 and reached Philadelphia in 1796, not long after the ratification of the U.S. Constitution. A broadside celebrated the 3,000-pound animal as “the most respectable animal in the world” whose “Intelligence … makes as near an approach to Man, as Matter can approach Spirit.” She was said to be “so tame” as to travel “loose, and has never attempted to hurt anyone.” In 1797, she traveled to Cambridge, Massachusetts, in time for Harvard’s commencement. Very soon, this seemingly most exotic of animals became a symbol of the new United States, an adopted animal ancestor.
Even before the Elephant toured the United States, Americans had been unusually interested in elephants. Benjamin Franklin collected tusks and elephant bones, knew the difference between the African and the Asian elephant, and, on “the scale of beings,” placed the oyster at the bottom and the elephant at the top. In making this assessment, he relied on knowledge of antiquity and also on English travel narratives. In 1554, John Lok, ancestor of the political philosopher, served as master of a ship voyaging to present-day Ghana; it brought back “certeyne blacke slaves”—the first enslaved Africans in London—250 elephant tusks, an elephant head, and a report of the elephant: “Of all beastes they are moste gentyll and tractable,” according to this report, “for by many sundry ways they are taught and do understand: in so much that they learne to do due honour to a king, and are of quicke sence and sharpenes of wyt.” Or, as the English clergyman Edward Topsell wrote in his Historie of Foure-Footed Beastes in 1607, “There is not any creature so capable of understanding as an Elephant.” But there was another reason Americans were interested in elephants. Franklin regretted the extinction of the American elephant, “no living elephants having been seen in any part of America.” But the bones and teeth of a so-called animal incognitum, a massive, unnamed animal, had been found along the Hudson and Ohio Rivers, starting in 1705, and when some turned up in South Carolina, enslaved Africans pointed out that they resembled the bones of African elephants. In 1784, Ezra Stiles, the president of Yale, wrote in his diary about a newfound tooth, “but whether an Elephant or Gyant, is a Question.” Thomas Jefferson set about finding a living specimen of this animal—he called it a “mammoth”—in order to answer the insult made by a French naturalist who had declared, “No American animal can be compared with the elephant.” Jefferson charged Meriwether Lewis and William Clark with finding an American elephant. “In the present interior of our continent,” he explained, “there is surely space and range enough for elephants.” Americans didn’t find any, but they did start importing them, and then began adopting the elephant as a national symbol: gigantic and wise. By 1824, one elephant held captive in the United States was named “Columbus.” In 1872, after the ratification of the Fourteenth and Fifteenth Amendments, which established that all men are “Persons,” this newly amended, reconstructed Constitution was represented in political cartoons as … an elephant. Two years later, the elephant became the symbol of the Republican Party: the immense, powerful, and intelligent might of the Union reconstructing a Confederacy of dunces, the Democratic jackass.
After the abandonment of Reconstruction, the fate of the elephant took a turn. Americans began importing elephants from Africa, most famously Jumbo, brought to New York by P. T. Barnum in 1882. Barnum bought Jumbo from the London Zoo, but he imported most of his animals from the German exotic-animal trader Carl Hagenbeck. Between 1875 and 1882, Hagenbeck claimed, he shipped about 100 elephants to the United States. Most were imported by circuses; many died in zoos, where the display of exotic animals, fettered and caged, became a feature of the age of imperialism.
The first American zoo opened in Philadelphia in 1874; on opening day, a kangaroo broke both her legs on the bars of her cage. The zoo’s first exhibits included an Asian elephant, Jennie, born wild in 1848 and bought from a circus. “There would not have been zoos in America without elephants,” the historian Daniel Bender has observed. Zoos prided themselves on exhibiting biological specimens for scientific study, but the elephants they acquired from circuses had been tortured into submission to entertain crowds by performing tricks. Captive elephants can be tamed and are often termed “domestic” because they can be trained to live peaceably in confinement, but they are not “domesticated” animals, like dogs or cows or sheep, because they have never been selectively bred. Many elephants tortured by circus trainers would, in the end, fight back. “Mad elephants,” they were declared, and were offloaded to zoos. Sometimes the rampages of bull elephants are due to musth, a period of heightened sexuality associated with aggressive behavior. But bulls aren’t the only elephants that become ungovernable, and most elephant aggression is a response to violence. An elephant named Bolivar joined Jennie at the Philadelphia Zoo from the Forepaugh Circus after he killed a spectator who’d burned him with a lit cigar.
In the 1880s, when Jennie was in her 30s, she was captured in motion when Eadweard Muybridge, with funding from the University of Pennsylvania, photographed her for his Animal Locomotion series. She lumbers along, as if free. But many elephants in the United States spent their entire lives in foot chains that evoked nothing so much as slave coffles. During the age of Jim Crow, the elephant in some meaningful ways replaced “the slave” in the American imagination, a nonhuman nonperson to be shackled, whipped, and even lynched, by daylight, in the public square. After Jennie died in 1898, her skin was tanned and turned into wallets, sold as souvenirs. Elephant insurrections were put down with elephant executions, as the historian Amy Louise Wood has chronicled. At least 36 American-owned elephants were sentenced to execution between 1880 and 1930. Many of these elephant executions, not all of which succeeded, took place in the Jim Crow South, in states that included Georgia, Texas, and South Carolina, but they happened in the North, too. Most often, elephants were notionally charged with murder—they tended to kill their keepers—and executed, vigilante style, as if they were criminals. In 1883, P. T. Barnum executed Pilot, an elephant The New York Times , in a send-up of its own crime reporting, described this way: “He had no regard for religion or morals.” In 1885, another Barnum elephant was chained to four trees in Keene, New Hampshire, and executed by firing squad in front of 2,000 spectators. In 1894, Tip, exhibited in Central Park, was indicted, “tried and convicted” for murder, and then publicly poisoned. Six-ton, 36-year-old Topsy, named after the slave child in Uncle Tom’s Cabin , “the first baby elephant to be held in captivity in the United States,” worked for a circus and killed three men in three years before being sold to a park in Coney Island, where, in 1903, she was executed; electrodes were strapped to her feet and a noose around her neck was tied to a steam engine after she had been fed carrots loaded with cyanide. The Edison Manufacturing Company electrocuted her and made a film of it, Electrocuting an Elephant.
In 1916, Sparks World Famous Shows, a traveling circus, was in Kingsport, Tennessee, when a trainer, riding on an elephant named Mary at the head of a parade leading spectators to the circus, “dealt her a blow over the head with a stick.” She grabbed him by the waist with her trunk and, according to one report, “sunk her giant tusks entirely through his body” and then trampled on him, “as if seeking a murderous triumph.” The circus’s publicist decided to stage a public execution by hanging her from a derrick provided by the Clinchfield Railroad Company. “I mean, if we have to kill her, let’s do it with style,” he said. The hanging broke her neck, but, as was reported at the time, “the apparent intelligence of the animal made her execution all the more solemn”: She tried to use her trunk “to free herself.” The NAACP requested materials about the execution for its lynching files.
The Bronx Zoo, then known formally as the New York Zoological Park, opened in 1899, with funding from Andrew Carnegie and J. Pierpont Morgan, under the direction of William T. Hornaday. It was meant to be an answer to the cheap attraction of circuses and the brutal exploitation of circus animals. Hornaday called his zoo “the high-water mark of civilization.” Exotic animals were held captive in elegant, neoclassical, Victorian homes—the Bird House, Antelope House, Reptile House—with the buildings arranged along tree-lined streets in the form of the most fashionable Victorian suburbs. The Bronx Zoo, too, was a retreat from the city, a city teeming with immigrants, living in poverty. At its center stood a massive dome-topped limestone mansion modeled after the Antwerp Zoo’s Palais des Hippopotames: the Elephant House.
Hornaday had started out as a big-game hunter and, as he wrote in a memoir, Two Years in the Jungle, shot his first elephant in India in 1877. (The stuffed remains of another elephant he killed were displayed at Harvard until 1973.) He had gone on to become the country’s leading conservationist. Hornaday helped save the bison: He collected and grew a herd for the zoo and then shipped it west as part of one of the world’s first efforts to preserve an endangered species.
Zoological parks and national parks were two sides of the same coin. Hornaday advocated for, and Theodore Roosevelt in the White House helped deliver, the public ownership and stewardship of land in the West, especially in the form of the national parks, a conservation project that involved displacing Indigenous peoples, in violation of treaties, not by granting rights to trees or wolves but by granting the power of environmental protection to the state. As Oliver Wendell Holmes wrote in 1908, “The state, as quasi-sovereign and representative of the interests of the public, has a standing in court to protect the atmosphere, the water, and the forests within its territory.” In arguing for the protection of wildlife (generally at the expense of Native Americans), no conservationist was more fierce than Hornaday. “I will make no compromise with any of the enemies of wildlife,” he said in 1911. Two years later, he published a manifesto called Our Vanishing Wildlife.
The Sierra Club compared the book’s zeal to that of abolitionist literature “when the force of great moral convictions won the day against greed and wrong.” Young Aldo Leopold reviewed it and called it “the most convincing argument for better game protection ever written.” Hornaday believed in a still-more-elaborate and racist chain of being than did the 18th-century Framers of the U.S. Constitution. In 1906, he exhibited in his zoo an African man named Ota Benga, alongside the primates. (Benga later committed suicide.) Hornaday believed elephants to be not only the most civilized of all animals, but also more civilized than some humans, pronouncing it “as much an act of murder to wantonly take the life of a healthy elephant as to kill a native Australian or a Central-African savage.” Hornaday decried the lynching of elephants because he believed in the morality of elephants. “I know of no instance on record wherein a normal elephant with a healthy mind has been guilty of unprovoked homicide, or even of attempting it,” he wrote, attributing elephant rampages to mistreatment. “So many men have been killed by elephants in this country that of late years the idea has been steadily gaining ground that elephants are naturally ill-tempered, and vicious to a dangerous extent. Under fair conditions, nothing could be farther from the truth.” Instead, “many an elephant is at the mercy of quick-tempered and sometimes revengeful showmen, who very often do not understand the temperaments of the animals under their control, and who during the traveling season are rendered perpetually ill-tempered and vindictive by reason of overwork and insufficient sleep. With such masters as these it is no wonder that occasionally an animal rebels.” Hornaday articulated, in effect, an elephant right to revolution. It apparently did not apply to elephants at the Bronx Zoo.
In 1908, five years after Topsy was electrocuted on Coney Island, Hornaday bought an Asian elephant named Alice from Topsy’s old circus. She was loaded into a teakwood crate and hoisted onto a truck. Housed temporarily in the Antelope House, she went on a rampage while on a walk, after being spooked by a puma, and wreaked havoc in the Reptile House. As Hornaday later wrote, she was captured again, and controlled “by vigorous work with the elephant hooks” and then shackled: “We quickly tied her hind legs together,—and she was all ours. Seeing that all was clear for a fall, we joyously pushed Alice off her feet. She went over, and fell prone upon her side. In three minutes all her feet were securely anchored to trees, and we sat down upon her prostrate body.” She was forced to lie in chains.
In 1904, Hornaday bought Gunda, a male Asian elephant, born in the wild in northeast India. Like Alice, he was trained with a howdah to give rides to children. But in 1912, Gunda rebelled: He gored his keeper and was declared a mad elephant. (The keeper survived.) Hornaday ordered Gunda to be placed in chains, for two years, until his sorry state led, in 1914, to complaints that the zoo should be closed. The zoo insisted that Gunda was happy. In 1915, The New York Times ran a story with this headline:

PUT DOUBLE CHAINS ON GUNDA AGAIN; KEEPERS SAY HE LIKES IT

“All day long the huge animal—he is nearly ten feet tall—stands swaying, moving his great body in a diagonal direction—weaving, the zoologists call it,” the Times reported. “This is all that Gunda seems to do—just stand and sway his body. He is as ceaseless as Niagara Falls.” (Swaying, or weaving, is a sign of distress.) The injured keeper and Hornaday both insisted, as the Times reported, “that the elephant is content in his chains, that he does not want to roam around his pen, and that the chains are long enough to permit him every freedom of movement, whether standing or lying down.” In the end, Hornaday had Gunda shot. His remains were fed to the lions.
A half century later, Happy and Grumpy, shipped by truck from Texas to New York, moved into the Elephant House.
The Bronx Zoo opened its 38-acre Wild Asia exhibit in August 1977. Like Lion Country Safari when it first opened, Wild Asia represented the very best in modern zookeeping.
The New York Times called it “probably the finest wild-animal display in the country east of the San Diego Zoo’s Wild Animal Park.” Earlier, the reporter said, visiting the Bronx Zoo had been like “visiting an inmate in a prison.” Now it was like visiting an animal in the wild. “Elephants crashed out of the forest and lumbered down the hillside to splash into the water,” the reporter gushed. “Four of them plunged in over their heads, bobbed up to spray the water from their trunks and cavorted playfully together.” Happy and Grumpy were not among those four elephants, who were Groucho, a male, and three females, named after the Andrews sisters, Maxine, Laverne, and Patty. “To immerse yourself in this great Asian heartland, no visas, inoculations or air fares are necessary,” the Times reported, echoing earlier coverage of Lion Country. “It involves no jet lag or any lag longer than needed to recuperate from a journey by subway, bus or car.” It needed only an uptown train.
But when Happy and Grumpy arrived at the Bronx Zoo, they didn’t live in Wild Asia. They lived in the Elephant House with an older Asian elephant named Tus and performed tricks and gave rides to children.
The Bronx Zoo seemed caught between two ways of thinking about elephants: In Wild Asia, they were wild animals; in the Elephant House, they were toys for tots. The infantilization of the elephant had begun in earnest with Walt Disney’s Dumbo, released in 1941. In Dumbo, Mrs. Jumbo, a circus elephant, has a baby named Jumbo Junior, presumably the son of the world-famous Jumbo, but cruelly nicknamed “Dumbo.” Defending him, Mrs. Jumbo grows violent, and, like the Bronx Zoo’s Alice, is dragged down and shackled in chains. Then, like Gunda, she’s locked up, in this case in a circus trailer over which a sign is hung, reading DANGER: MAD ELEPHANT.
(The actual Jumbo did not have any offspring, but Barnum sometimes passed off as his daughter an elephant named Columbia. In 1907, Columbia became unruly and was “condemned to death.” As a lesson to the other elephants, she was strangled “before twenty-one other elephants, including her mother.” Dumbo offered a rewriting of that story.) In the Baby Boom of the 1950s and 1960s, baby elephants became all the rage, in everything from toys to stuffed animals. In the era of postcolonial independence movements, the wild, in the American imagination, became first juvenilized, and then feminized. In 1959, the French territories in west-central Africa sent President Dwight Eisenhower the gift of a baby elephant named Dzimbo. Within the GOP, the elephant also became feminized, a symbol of the political housewife, the conservative white woman Republican activist. One senator, speaking to the National Federation of Republican Women, suggested that the elephant was the right symbol for the Republican Party because an elephant has “a vacuum cleaner in front and a rug beater behind.” The symbol became less useful in the political tumult of the 1960s. In 1968, reporters covering the Republican National Convention in Miami went out to the airport to witness the arrival, by Delta Air Lines, of a baby elephant in a tutu, which was nudged into a Hertz trailer for delivery to the convention, a gift to Nixon. Americans had begun objecting to zoos, especially after the publication, in 1968, of a Life essay by Desmond Morris called “Shame of the Naked Cage.” In 1971, activists operating undercover on behalf of the Humane Society investigated the nation’s zoos and described them as slums, another 1970s story of urbanization gone wrong. The San Diego Zoo, which opened its Wild Animal Park in 1972, answered the call to move to the suburbs. And the Bronx Zoo began planning Wild Asia.
In 1975, with the publication of Animal Liberation, a manifesto by the philosopher Peter Singer, the animal-welfare movement began to yield to the animal-rights movement, and environmental protection began to yield to environmental rights. In the U.S., most federal measures in place to protect the environment, regulate pollution, preserve endangered species and wildlife habitat, and halt climate change date to the early 1970s: the Clean Air Act, the Clean Water Act, the National Environmental Policy Act.
One of the first proposals to address environmental degradation by way of a constitutional amendment came in 1970, when Wisconsin Senator Gaylord Nelson, who also founded Earth Day, proposed an amendment to read, “Every person has the inalienable right to a decent environment. The United States and every State shall guarantee this right.” But by then, the Constitution had already become effectively unamendable. Environmental-rights proposals kept getting introduced—“The right of each person to clean and healthful air and water, and to the protection of the other natural resources of the nation, shall not be infringed upon by any person,” read one that had the support of lawmakers from 37 state legislatures—and they kept going nowhere. In the absence of any language in the Constitution regarding the environment, legislative and statutory measures are extraordinarily vulnerable: From 2017 to 2021, the Trump administration rolled back nearly 100 environmental provisions. The Biden administration has made restoring these provisions, and adding more, a top priority, but all of them are reversible.
Other countries amended their constitutions. Out of 196 constitutions in the world, at least 148 now make some provision for what is called “environmental constitutionalism.” Animal constitutionalism has been following in its tracks. In 1976, in a decision attributed to the influence of Hinduism, India adopted a constitutional amendment declaring it to be the duty of every citizen “to protect and improve the natural environment including forests, lakes, rivers and wild life, and to have compassion for living creatures”—language its Supreme Court in 2014 described as “the magna carta of animal rights” in a decision in which the court defined compassion to include “concern for suffering.” In 2002, prodded by the Green Party, Germans amended their constitution’s provision about the state’s “responsibility toward future generations”—its obligations to the natural world—by adding three words: “and the animals.” In the U.S., though, with its unamendable Constitution, both environmentalists and animal-rights activists began to adopt a novel legal strategy: arguing for the rights of nature. In 1972, Christopher Stone published a law-review article called “Should Trees Have Standing?” Stone argued that the history of law represented a march of moral progress in which the notion of being a rights-bearing person had been extended to an ever-widening class of actors, from only certain men to more men, then to some women, and, finally, to all adults and then children and even corporations and ships. Why not trees and rivers and streams? The logic had a pedigree: As early as 1873, Frederick Douglass had publicly called for a defense of nonhuman animals, a compassion born of having been treated like one and of witnessing the consequences of the cruelty instilled by living near the bottom of a presumed chain of being. “Not only the slave, but the horse, the ox, and the mule shared the general feeling of indifference to rights naturally engendered by a state of slavery,” he said. If personhood were extended to the natural world, remedies for harms against the natural world could be pursued not only by humans who were affected by those harms but on behalf of nature itself. Stone hoped, urgently, that his article would have an effect, and rushed it to press in order to get it before Justice William O. Douglas, who in fact cited it, months later, in his dissenting opinion in Sierra Club v. Morton: “Contemporary public concern for protecting nature’s ecological equilibrium should lead to the conferral of standing upon environmental objects to sue for their own preservation.” Stone explained, half a century ago, that he was proposing this solution as the best possible remedy to address a looming catastrophe. “Scientists have been warning of the crises the earth and all humans on it face if we do not change our ways—radically,” Stone wrote in 1972. “The earth’s very atmosphere is threatened with frightening possibilities: absorption of sunlight, upon which the entire life cycle depends, may be diminished; the oceans may warm (increasing the ‘greenhouse effect’ of the atmosphere), melting the polar ice caps, and destroying our great coastal cities.” His proposal did not prevail, and the catastrophe he warned of has only drawn closer.
But if the Constitution wasn’t amended during the early decades of the environmental and animal-rights movements, and if rights-of-nature arguments failed, American public opinion was shifting. By 1985, more than three in four Americans answered yes to the question “Do you think animals have rights?” In 1989, 80 percent agreed that “animals have rights that limit humans.” By 1992, more than half of Americans surveyed said they believed that laws protecting endangered species had not gone far enough. A 1995 survey found that two-thirds of those polled agreed with the statement that “an animal’s right to live free of suffering should be just as important as a person’s right to live free of suffering.” This shift in opinion does not appear to have changed Happy’s experience of a toys-for-tots life in the Bronx Zoo’s Elephant House. In the 1980s, the zoo held “Elephant Weekends.” “Tus, Happy and Grumpy have been rehearsing all week for the big show—a workout on the tambourine, a run-through on the waltz, some salutes and bows,” the Times reported in 1981. Larry Joyner, “their low-key, no-nonsense trainer,” a former circus trainer who started with the zoo in 1979, said, “Since elephants are extremely intelligent, they realize that when there are people in front of them, they can work slower and get by with it because I don’t yell at them as much.” (“If an elephant doesn’t obey him, Joyner whacks it mightily on its thick-skinned side with a bull hook,” the L.A. Times later reported. “Good behavior is rewarded with an apple and a pat.”) Joyner particularly noted 10-year-old Happy’s talents: “Happy is a more physical elephant than anything I’ve ever seen. Most people, when they train elephants, cats, horses or whatever, usually turn them loose and just watch them for hours. Then you can figure what trick to put on each elephant. Happy runs more, she moves more, she’s rougher. That’s why I put all the physical tricks on her: the hind-leg stand, the sit-up. Grumpy’s more intelligent. She learns well; she uses her head.” For the celebration, Happy, Grumpy, and Tus were dressed up in costumes, decorated blankets whose designer told the Times, “There is a sort of Oriental smoking jacket for Grumpy—black-and-yellow checked. Happy will have a blue-and-black polka-dotted dress that also has tassels and ‘diamonds’—they are really rhinestones—on it. It is all going to be very extravagant.” During the exhibit, the Associated Press reported, “zoo-goers will see Tus pick up a human being. Happy will do a hind leg stand. Grumpy will pick up an egg without breaking it.” The highlight of an Elephant Weekend was a tug-of-war, reported on, in 1984, by The New Yorker: “During the past four years’ history of Elephant Weekends the non-elephant team has won the tug-of-war only once—that was in 1982, when the victors were the Fordham Rams football team. On Saturday, the challengers were members of the Purchase, New York, Volunteer Fire Department. They didn’t win. Nor did the Fordham Rams on Sunday. Grumpy won.” Elephant Weekends came to an end. And the elephants at the Bronx Zoo began dying off. In 1981, Patty had a calf, Astor, named after Brooke Astor, who had helped fund the Wild Asia exhibit; the calf died less than two years later. (The infant mortality rate for elephants in American zoos is 40 percent, almost triple the rate in the wild, according to a 2012 investigation by The Seattle Times.)

Laverne died in 1982, of a salmonella infection; she was 12. Half the elephant deaths in American zoos are of animals younger than 24 years old, The Seattle Times reported, and most die “from injury or disease linked to conditions of their captivity, from chronic foot problems caused by standing on hard surfaces to musculoskeletal disorders from inactivity caused by being penned or chained for days and weeks at a time.” In 1985, shortly before Groucho was moved to the Fort Worth Zoo, Happy, Grumpy, and Tus were moved to Wild Asia. The Elephant House became a visitor center.
In the wild, Happy would have become pregnant for the first time around the age she was when she moved to Wild Asia. She would have had a calf every three or four years, until she was in her 50s, the age she is now. She would have been living with daughters and granddaughters. Instead, she has no family at all.
In the 1980s, when the Bronx Zoo moved Happy and Grumpy from the Elephant House to Wild Asia, other zoos began to relocate their elephants, especially as the animal-rights movement grew more militant, adopting some of the same tactics as the anti-abortion organization Operation Rescue. The Central Park Zoo and Prospect Park Zoo closed their elephant exhibits. San Francisco, Detroit, Santa Barbara, and Chicago all announced the end of the exhibition of captive elephants. Circuses, including Ringling Bros., followed. In this context, when captive elephants escaped or rebelled, those stories garnered more and more heated attention. Hardly a month passed without another report, most well founded but some exaggerated (one circus sued PETA for defamation). In 1988, keepers at the San Diego Zoo beat an African elephant named Dunda with ax handles for two days, while her legs were chained. Three years later, an Asian elephant at that zoo killed her keeper. In 1992, an Asian elephant named Janet escaped from a circus in Florida and targeted two of her trainers (without harming any of the children riding on her back). The next year, a small group of circus elephants in Florida together trapped and stomped their trainer, and, in Honolulu in 1994, a 21-year-old African elephant killed her trainer in the arena and escaped. (Police shot and killed her in the street.) In 1995, two female elephants escaped a circus while it was in Pennsylvania and, months later, while it was in New York. In 2002, an elephant named Tonya escaped for the fourth time in six years, having fled a Maine wildlife park, a circus in Ohio, another in Pennsylvania, and another in South Carolina.
The Bronx Zoo adopted a protocol known as “protected contact,” which meant that Happy and Grumpy, hand-raised since infancy, no longer spent much time in the close company of people. And opposition to keeping elephants in captivity deterred the zoo from bringing in new elephants to keep the others company. Tus, who likely had been something close to a mother for Happy and Grumpy, died in May of 2002. Two months later, Patty and Maxine attacked Grumpy. Happy wasn’t with them, but she would have heard it happen. Grumpy’s injuries were so grave that in October of 2002, the zoo decided to euthanize her.
“It is hardly fair to say that Happy has a history of not getting on with other elephants,” Joyce Poole wrote in her affidavit on behalf of the Nonhuman Rights Project. In five decades at the zoo, Happy and a handful of other elephants had “been forced to share a space that, for an elephant, is equivalent to the size of a house.” And two of those elephants killed her closest companion. After that, it was impossible to house Happy with Patty and Maxine. The zoo picked a young female elephant, Sammy, to be a companion for Happy, but she died not long afterward.
In 2005, at a meeting at Disney World, the Association of Zoos and Aquariums decided to “speak and act with a unified voice”—determining to defend keeping elephants in zoos and to call critics of their captivity “extremists”—even as it set new rules for elephant care. The AZA requires that “each zoo with elephants must have a minimum of three females (or the space to have three females), two males or three elephants of mixed gender.” The Bronx Zoo is in compliance with this rule because it has room for elephants it does not have.
Happy might have been better off if she’d never left Lion Country Safari. “If we can’t keep elephants in captivity properly, we shouldn’t,” Lion Country’s Terry Wolf told me. “And we’ve proven that we can’t.” In 2006, during Wolf’s tenure, Lion Country decided to release its last elephants. That year, The New York Times reported that Bronx Zoo officials “say it would be inhumane to sustain an exhibit with a single elephant.” Happy has been alone ever since.
When you ride the monorail through Wild Asia, your view is quite constrained. All the cars face the same direction, and you can see only what’s in front of you. Behind the monorail lie the rest of the zoo structures, including the Elephant Barn. In 2005, Joshua Plotnik spent a very hot summer on top of that barn, watching Happy, Patty, and Maxine, by turns, inspect an elephant-size mirror.
Plotnik was in a doctoral program at Emory University when he decided to study elephants. He wanted to know, empirically, “How do we get inside the elephant’s head?” Together with Diana Reiss, now a professor of psychology at Hunter College but at the time a scientist with the Wildlife Conservation Society, Plotnik decided to see if an elephant could pass what’s known as the mirror self-recognition test. Humans can pass this test around the age of 2. Only the great apes and dolphins had been proven to pass it. Plotnik and Reiss encased a two-by-two-meter acrylic mirror in a steel frame, and the zoo helped them install it in the pen. “I remember Maxine and Patty getting really close to the mirror on the first day,” Plotnik told me. “They would get down on their knees or try to smell over the mirror to inspect behind it—it’s as if they were trying to get at that elephant in the mirror.” This is what many animals do: They consider the animal in the mirror a stranger and try to figure out how to intimidate and threaten it, or how to meet and greet it. “Very quickly when they realize they can’t touch, smell, or hear this animal, some species just stop displaying social behavior,” Plotnik said. But elephants investigate; they move one way, and then another, looking. “It’s like Harpo and Groucho in Duck Soup,” he said. “It’s as if they’re asking, Why is that animal doing the same thing that I’m doing?” And then an elephant like Happy decides, If there’s no other elephant there, it must be me.
Sitting on the Elephant Barn, Plotnik and Reiss were astonished. Happy, Patty, and Maxine did the most interesting things. “Next, they start inspecting themselves, inspect their mouths, look closely at parts of their body they don’t otherwise get to see. They’d grab onto their ears and pull their ears back and forth.” To prove that the elephants understood that they were looking at a reflection, Plotnik and Reiss devised a test, a modified version of a test done on chimpanzees: They painted a white X on the elephants’ foreheads and, with a glow-in-the-dark Halloween paint that’s invisible during daylight, they painted another one on the other side, as a control. Only one of the three elephants passed this task. Happy walked up to the mirror and reached her trunk up to touch the X.
I’ve wondered whether she thought, for a moment, that it was Grumpy, come back, before she realized that she was looking, instead, at herself.
Plotnik hasn’t seen Happy since then. He doesn’t believe that he knows what’s best for her, and he’s baffled that the NhRP thinks it does. The question, he said, is whether Happy wants to be with other elephants. “If we had asked this question decades ago, when Happy was first brought to a zoo, yes, I absolutely think what would have been best for Happy would have been for her to remain with her family, in the wild,” Plotnik told me. “But she has been in captivity now for 50 years, unfortunately, so it’s really difficult to know whether such a big change to her life would be in her best interest now.” The subtlety of that position has been lost on much of the press, and especially on celebrities who have taken up Happy’s cause. Elephant advocacy has long been a Hollywood hobby, from Richard Pryor and Cher to Lily Tomlin and Edward Norton. While that commitment is surely earnest, it comes at very little cost to celebrities’ lives and livelihoods. But, as Plotnik points out, #FreeHappy might well come at the expense of poor farmers in Thailand, much like the campaign to save the bison, or preserve the national parks, came at the expense of people like the Yosemite Indians.
Plotnik does most of his fieldwork in Kanchanaburi, Thailand, with wild elephants. He’s fluent in Thai and spends a great deal of time with villagers who work with elephants, and also with villagers who are very frustrated with elephants eating their crops and destroying their fields. “We need to refocus our attention on the fact that fewer than 50,000 Asian elephants remain on the planet. Countries like Thailand and Sri Lanka have a long history of coexistence between humans and elephants,” he said. “Any decisions about elephant personhood that might have a cascading effect on elephant welfare and conservation around the world ought to take into account the needs and livelihoods of the people that have existed alongside them for thousands of years. Most Westerners just have never thought about that impact.” People on either side of the legal battle about Happy tend to see each other as villains, unable to find common cause or common purpose in their growing desperation about what humans are doing to one another and to animals and to the world. Meanwhile, waters rise, coastlines erode, humans and above all the poor suffer and die, diseases spread, homes wash away, forests die, fortunes are lost, habitats disappear, species die out. In the end, elephants are just a lot better at getting along with one another than people are—unless they’re held captive, for year after year, decade after decade.
Still, even the most poorly treated elephants can thrive in sanctuaries. Sissy was born in Thailand and first exhibited at Six Flags Over Texas in 1969. She was moved four times, and, as Joyce Poole wrote in her affidavit, she “spent a decade and a half alone before being sent to the Houston Zoo, where she was labelled autistic and antisocial.” In 1997, returned to solitary confinement in Gainesville, she crushed a park supervisor to death. “She was moved again to El Paso Zoo,” Poole wrote, “where she was beaten because she was a killer elephant.” At some point, her trunk became partially paralyzed. In 2000, she wasn’t expected to live out the year. But at the start of that year, she was moved to the Elephant Sanctuary in Tennessee. Within weeks of her arrival, she was spotted lying down, something she hadn’t done in the zoo for years. She made a fast friend, Winkie. “Within six months of arrival she was calm and cooperative,” Poole wrote. “She became a leader, putting all elephants at ease.” Nearly 22 years later, Sissy is still there, living on a sanctuary of almost 3,000 acres.
The Nonhuman Rights Project, founded in 1996, always intended to begin its litigation with Happy. In a Supreme Court opinion written in 1992, Antonin Scalia had dismissed legal arguments about people claiming to have standing to enforce the protection of animals and the natural world. The case concerned the Endangered Species Act, and the only elephants mentioned were wild elephants in Sri Lanka, but Scalia, who grew up in Queens, scoffed, “Under these theories, anyone who goes to see Asian elephants in the Bronx Zoo, and anyone who is a keeper of Asian elephants in the Bronx Zoo, has standing to sue.” Maybe that caught someone’s attention. Meanwhile, personhood claims began to look promising. In a 2004 case filed by an attorney acting on behalf of whales and dolphins, the Ninth Circuit said that an animal “cannot function as a plaintiff” but that nothing in the Constitution “prevents Congress from authorizing a suit in the name of an animal, any more than it prevents suits brought in the name of artificial persons such as corporations, partnerships or trusts, and even ships, or of juridically incompetent persons such as infants, juveniles, and mental incompetents.” To be granted personhood, in the legal sense, something doesn’t have to be like a person, in the colloquial sense. But if it were necessary, elephants would come close. If having a conscious awareness of one’s past, present, and future is a definition of personhood, the philosopher Gary Varner argued in 2008, then “elephants might be persons—or at least near-persons.” In 2013, the NhRP formalized its decision to file a petition on behalf of Happy as its first client, but then the lawyers changed their minds overnight. “We decided to go with chimpanzees instead,” Steve Wise told me. For one thing, they had more chimpanzee experts available. Jane Goodall is a founding member of NhRP’s board. Happy’s case would have to wait.
In 2013, the NhRP filed habeas corpus petitions for chimpanzees named Kiko and Tommy. The New York court rejected both petitions, pointing out that “habeas corpus relief has never been provided to any nonhuman entity.” In 2016, after the NhRP filed a second habeas corpus petition for Kiko, Harvard’s Laurence Tribe submitted an amicus brief, disputing the court’s claim that Kiko could not be a person on the ground that persons bear both rights and duties. The court’s definition of personhood, he argued, “would appear on its face to exclude third-trimester fetuses, children, and comatose adults (among other entities whose rights as persons the law protects).” In 2018, the New York Court of Appeals denied a motion for permission to appeal, but one judge, Eugene M. Fahey, observed that “the issue whether a nonhuman animal has a fundamental right to liberty protected by the writ of habeas corpus is profound and far-reaching. It speaks to our relationship with all the life around us. Ultimately, we will not be able to ignore it.” Meanwhile, animal personhood had been established, at least notionally, elsewhere. In 2016, a court in Argentina ruled that “a chimpanzee is not a thing,” and declared that “great apes are legal persons, with legal capacity.” In 2018, a judge in India declared “the entire animal kingdom” to be “legal entities having a distinct persona with corresponding rights, duties and liabilities of a living person.” In the U.S., other cases have been working their way through other courts. In 2018, in Oregon, the Animal Legal Defense Fund filed a suit on behalf of a horse named Justice. The judge dismissed the case for lack of standing, writing, “The problem is that there is not an adequate procedural avenue for Justice to utilize that would grant him access to the courthouse door,” and expressed concern about the “profound implications of a judicial finding that a horse, or any non-human animal for that matter, is a legal entity.” The Pepperdine law professor Richard Cupp, an ardent opponent of animal personhood, observed, about Justice the horse, “Any case that could lead to billions of animals having the potential to file lawsuits is a shocker in the biggest way. Once you say a horse or dog or cat can personally sue over being abused, it’s not too big a jump to say, ‘Well, we’re kind of establishing that they’re legal persons with that. And legal persons can’t be eaten.’” This fall, the ALDF filed a request in a court in Cincinnati representing the descendants of a “Community of Hippopotamuses” once owned by Pablo Escobar as “interested persons” in a legal dispute in Colombia. “Hippos are People, Too!” ran the reports. Hippos are not people. Maybe the press isn’t quite ready for the gravity of Happy’s case.
Happy’s plight is as serious and desperate as the consequences of the court’s eventual ruling are unknown and unknowable and, quite possibly, profound. She stands and stares and lifts one foot. She swings her trunk. She sways, watching the monorail pass by, again and again and again. The New York Court of Appeals could hear the case as early as this winter. But courtroom arguments about elephant personhood have taken place before. In 2017, the Nonhuman Rights Project sought habeas corpus relief for three elephants in Connecticut. In oral arguments, the judges asked Wise about the implications of elephant personhood:

Judge: Does your argument extend to other forms of animals in the wild?

Wise: Our argument extends to elephants.
Judge: I’m asking you, because it’s a logical question, how far this proposition goes. You’re asking a court, not a legislature, to make a radical change in the law, and I want to have you prognosticate as to where this leads.
In the wandering ways of courts, the answer never came. But one day soon, an elephant will stand, metaphorically, at the courtroom door, a great gray emissary from the natural world, wild. She will rumble, raise her trunk, and trumpet, piercing the uncanny quiet.
" |
252 | 2,023 | "The People Cheering for Humanity’s End - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2023/01/anthropocene-anti-humanism-transhumanism-apocalypse-predictions/672230" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore The Progress Report: Why American progress has stalled, the rise of the supertall, and seeing Earth from space. Plus the end of humanity, Marjorie Taylor Greene, solving homelessness, mood swings, Cormac McCarthy, Shirley Hazzard, the return of the Old West, and more.
Why the Age of American Progress Ended Derek Thompson How Tall Is Too Tall? Bianca Bosker Seeing Earth From Space Will Change You Marina Koren Why Is Marjorie Taylor Greene Like This? Elaina Plott Calabro The People Cheering for Humanity’s End Adam Kirsch It’s High Noon in America Noah Hawley A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
The People Cheering for Humanity’s End
By Adam Kirsch
A disparate group of thinkers says we should welcome our demise.
“Man is an invention of recent date. And one perhaps nearing its end.” With this declaration in The Order of Things (1966), the French philosopher Michel Foucault heralded a new way of thinking that would transform the humanities and social sciences. Foucault’s central idea was that the ways we understand ourselves as human beings aren’t timeless or natural, no matter how much we take them for granted. Rather, the modern concept of “man” was invented in the 18th century, with the emergence of new modes of thinking about biology, society, and language, and eventually it will be replaced in turn.
As Foucault writes in the book’s famous last sentence, one day “man would be erased, like a face drawn in the sand at the edge of the sea.” The image is eerie, but he claimed to find it “a source of profound relief,” because it implies that human ideas and institutions aren’t fixed. They can be endlessly reconfigured, maybe even for the better. This was the liberating promise of postmodernism: The face in the sand is swept away, but someone will always come along to draw a new picture in a different style.
But the image of humanity can be redrawn only if there are human beings to do it. Even the most radical 20th-century thinkers stop short at the prospect of the actual extinction of Homo sapiens, which would mean the end of all our projects, values, and meanings. Humanity may be destined to disappear someday, but almost everyone would agree that the day should be postponed as long as possible, just as most individuals generally try to delay the inevitable end of their own life.
In recent years, however, a disparate group of thinkers has begun to challenge this core assumption. From Silicon Valley boardrooms to rural communes to academic philosophy departments, a seemingly inconceivable idea is being seriously discussed: that the end of humanity’s reign on Earth is imminent, and that we should welcome it. The revolt against humanity is still new enough to appear outlandish, but it has already spread beyond the fringes of the intellectual world, and in the coming years and decades it has the potential to transform politics and society in profound ways.
This view finds support among very different kinds of people: engineers and philosophers, political activists and would-be hermits, novelists and paleontologists. Not only do they not see themselves as a single movement, but in many cases they want nothing to do with one another. Indeed, the turn against human primacy is being driven by two ways of thinking that appear to be opposites.
The first is Anthropocene anti-humanism, inspired by revulsion at humanity’s destruction of the natural environment. The notion that we are out of tune with nature isn’t new; it has been a staple of social critique since the Industrial Revolution. More than half a century ago, Rachel Carson’s Silent Spring, an exposé on the dangers of DDT, helped inspire modern environmentalism with its warning about following “the impetuous and heedless pace of man rather than the deliberate pace of nature.” But environmentalism is a meliorist movement, aimed at ensuring the long-term well-being of humanity, along with other forms of life. Carson didn’t challenge the right of humans to use pesticides; she simply argued that “the methods employed must be such that they do not destroy us along with the insects.” In the 21st century, Anthropocene anti-humanism offers a much more radical response to a much deeper ecological crisis. It says that our self-destruction is now inevitable, and that we should welcome it as a sentence we have justly passed on ourselves. Some anti-humanist thinkers look forward to the extinction of our species, while others predict that even if some people survive the coming environmental apocalypse, civilization as a whole is doomed. Like all truly radical movements, Anthropocene anti-humanism begins not with a political program but with a philosophical idea. It is a rejection of humanity’s traditional role as Earth’s protagonist, the most important being in creation.
Transhumanism, by contrast, glorifies some of the very things that anti-humanism decries—scientific and technological progress, the supremacy of reason. But it believes that the only way forward for humanity is to create new forms of intelligent life that will no longer be Homo sapiens.
Some transhumanists believe that genetic engineering and nanotechnology will allow us to alter our brains and bodies so profoundly that we will escape human limitations such as mortality and confinement to a physical body. Others await, with hope or trepidation, the invention of artificial intelligence infinitely superior to our own. These beings will demote humanity to the rank we assign to animals—unless they decide that their goals are better served by wiping us out completely.
The anti-humanist future and the transhumanist future are opposites in most ways, except the most fundamental: They are worlds from which we have disappeared, and rightfully so. In thinking about these visions of a humanless world, it is difficult to evaluate the likelihood of them coming true. Some predictions and exhortations are so extreme that it is tempting not to take them seriously, if only as a defense mechanism.
But the revolt against humanity is a real and significant phenomenon, even if it is “just” an idea and its predictions of a future without us never come true. After all, unfulfilled prophecies have been responsible for some of the most important movements in history, from Christianity to Communism. The revolt against humanity isn’t yet a movement on that scale, and might never be, but it belongs in the same category. It is a spiritual development of the first order, a new way of making sense of the nature and purpose of human existence.
In the 2006 film Children of Men, the director, Alfonso Cuarón, takes only a few moments to establish a world without a future. The movie opens in 2027 in a London café, where a TV news report announces that the youngest person on Earth has been killed in Buenos Aires; he was 18 years old. In 2009, human beings mysteriously lost the ability to bear children, and the film depicts a society breaking down in the face of impending extinction. Moments after the news report, the café is blown up by a terrorist bomb.
The extinction scenario in the film, loosely based on a novel by the English mystery writer P. D. James, remains in the realm of science fiction—for now. But in October 2019, London actually did erupt in civil disorder when activists associated with the group Extinction Rebellion, or XR, blocked commuter trains at rush hour. At one Underground station, a protester was dragged from the roof of a train and beaten by a mob. In the following months, XR members staged smaller disruptions at the International Criminal Court in The Hague, on New York’s Wall Street, and at the South Australian State Parliament.
The group is nonviolent in principle, but it embraces aggressive tactics such as mock “die-ins” and mass arrests to shock the public into recognizing that the end of the human species isn’t just the stuff of movie nightmares. It is an imminent threat arising from anthropogenic climate change, which could render large parts of the globe uninhabitable. Roger Hallam, one of the founders of XR, uses terms such as extinction and genocide to describe the catastrophe he foresees, language that is far from unusual in today’s environmental discourse. The journalist David Wallace-Wells rendered the same verdict in The Uninhabitable Earth (2019), marshaling evidence for the argument that climate change “is not just the biggest threat human life on the planet has ever faced but a threat of an entirely different category and scale.” Since the late 1940s, humanity has lived with the knowledge that it has the power to annihilate itself at any moment through nuclear war.
Indeed, the climate anxiety of our own time can be seen as a return of apocalyptic fears that went briefly into abeyance after the end of the Cold War.
Destruction by despoliation is more radically unsettling. It means that humanity is endangered not only by our acknowledged vices, such as hatred and violence, but also by pursuing aims that we ordinarily consider good and natural: prosperity, comfort, increase of our kind. The Bible gives the negative commandment “Thou shalt not kill” as well as the positive commandment “Be fruitful and multiply,” and traditionally they have gone together. But if being fruitful and multiplying starts to be seen as itself a form of killing, because it deprives future generations and other species of irreplaceable resources, then the flourishing of humanity can no longer be seen as simply good. Instead, it becomes part of a zero-sum competition that pits the gratification of human desires against the well-being of all of nature—not just animals and plants, but soil, stones, and water.
If that’s the case, then humanity can no longer be considered a part of creation or nature, as science and religion teach in their different ways. Instead, it must be seen as an antinatural force that has usurped and abolished nature, substituting its own will for the processes that once appeared to be the immutable basis of life on Earth. This understanding of humanity’s place outside and against the natural order is summed up in the term Anthropocene, which in the past decade has become one of the most important concepts in the humanities and social sciences.
The legal scholar Jedediah Purdy offers a good definition of this paradigm shift in his book After Nature (2015): We find our fingerprints even in places that might seem utterly inaccessible to human beings—in the accumulation of plastic on the ocean floor and the thinning of the ozone layer six miles above our heads. Humanity’s domination of the planet is so extensive that evolution itself must be redefined. The survival of the fittest, the basic mechanism of natural selection, now means the survival of what is most useful to human beings.
In the Anthropocene, nature becomes a reflection of humanity for the first time. The effect is catastrophic, not only in practical terms, but spiritually. Nature has long filled for secular humanity one of the roles once played by God, as a source of radical otherness that can humble us and lift us out of ourselves. One of the first observers to understand the significance of this change was the writer and activist Bill McKibben. In The End of Nature (1989), a landmark work of environmentalist thought, McKibben warned of the melting glaciers and superstorms that are now our everyday reality. But the real subject of the book was our traditional understanding of nature as a “world entirely independent of us which was here before we arrived and which encircled and supported our human society.” This idea, McKibben wrote, was about to go extinct, “just like an animal or a plant”—or like Foucault’s “man,” erased by the tides.
If the choice that confronts us is between a world without nature and a world without humanity, today’s most radical anti-humanist thinkers don’t hesitate to choose the latter. In his 2006 book, Better Never to Have Been, the celebrated “antinatalist” philosopher David Benatar argues that the disappearance of humanity would not deprive the universe of anything unique or valuable: “The concern that humans will not exist at some future time is either a symptom of the human arrogance … or is some misplaced sentimentalism.” Humanists, even secular ones, assume that only humans can create meaning and value in the universe. Without us, we tend to believe, all kinds of things might continue to happen on Earth, but they would be pointless—a show without an audience. For anti-humanists, however, this is just another example of the metaphysical egoism that leads us to overwhelm and destroy the planet. “What is so special about a world that contains moral agents and rational deliberators?” Benatar asks. “That humans value a world that contains beings such as themselves says more about their inappropriate sense of self-importance than it does about the world.” Rather, we should take comfort in the certainty that humans will eventually disappear: “Things will someday be the way they should be—there will be no people.” Like anti-humanists, transhumanists contemplate the prospect of humanity’s disappearance with serenity. What worries them is the possibility that it will happen too soon, before we have managed to invent our successors. As far as we know, humanity is the only intelligent species in the universe; if we go extinct, it may be game over for the mind. It’s notable that although transhumanists are enthusiastic about space exploration, they are generally skeptical about the existence of extraterrestrial intelligence, or at least about the chances of our ever encountering it. If minds do exist elsewhere in the universe, the destiny of humanity would be of less cosmic significance.
Humanity’s sole stewardship of reason is what makes transhumanists interested in “existential risk,” the danger that we will destroy ourselves before securing the future of the mind. In a 2002 paper, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” the philosopher Nick Bostrom classifies such risks into four types, from “Bangs,” in which we are completely wiped out by climate change, nuclear war, disease, or asteroid impacts, to “Whimpers,” in which humanity survives but achieves “only a minuscule degree of what could have been achieved”—for instance, because we use up our planet’s resources too rapidly.
As for what humanity might achieve if all goes right, the philosopher Toby Ord writes in his 2020 book The Precipice that the possibilities are nearly infinite: “If we can venture out and animate the countless worlds above with life and love and thought, then … we could bring our cosmos to its full scale; make it worthy of our awe.” Animating the cosmos may sound mystical or metaphorical, but for transhumanists it has a concrete meaning, captured in the term cosmic endowment.
Just as a university can be seen as a device for transforming a monetary endowment into knowledge, so humanity’s function is to transform the cosmic endowment—all the matter and energy in the accessible universe—into “computronium,” a semi-whimsical term for any programmable, information-bearing substance.
The Israeli thinker Yuval Noah Harari refers to this idea as “Dataism,” describing it as a new religion whose “supreme value” is “data flow.” “This cosmic data-processing system would be like God,” he has written. “It will be everywhere and will control everything, and humans are destined to merge into it.” Harari is highly skeptical of Dataism, and his summary of it may sound satirical or exaggerated. In fact, it’s a quite accurate account of the ideas of the popular transhumanist author Ray Kurzweil. In his book The Singularity Is Near (2005), Kurzweil describes himself as a “patternist”—that is, “someone who views patterns of information as the fundamental reality.” Examples of information patterns include DNA, semiconductor chips, and the letters on this page, all of which configure molecules so that they become meaningful instead of random. By turning matter into information, we redeem it from entropy and nullity. Ultimately, “even the ‘dumb’ matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence,” Kurzweil prophesies.
In his 2014 book, Superintelligence, Nick Bostrom performs some back-of-the-envelope calculations and finds that a computer using the entire cosmic endowment as computronium could perform at least 10⁸⁵ operations a second. (For comparison, as of 2020 the most powerful supercomputer, Japan’s Fugaku, could perform on the order of 10¹⁷ operations a second.) This mathematical gloss is meant to make the project of animating the universe seem rational and measurable, but it hardly conceals the essentially religious nature of the idea. Kurzweil calls it “the ultimate destiny of the universe,” a phrase not ordinarily employed by people who profess to be scientific materialists. It resembles the ancient Hindu belief that the Atman, the individual soul, is identical to the Brahman, the world-spirit.
Ultimately, the source of all the limitations that transhumanism chafes against is embodiment itself. But transhumanists believe that we will take the first steps toward escaping our physical form sooner than most people realize. In fact, although engineering challenges remain, we have already made the key conceptual breakthroughs. By building computers out of silicon transistors, we came to understand that the brain itself is a computer made of organic tissue. Just as computers can perform all kinds of calculations and emulations by aggregating bits, so the brain generates all of our mental experiences by aggregating neurons.
If we are also able to build a brain scanner that can capture the state of every synapse at a given moment—the pattern of information that neuroscientists call the connectome, a term analogous with genome —then we can upload that pattern into a brain-emulating computer. The result will be, for all intents and purposes, a human mind. An uploaded mind won’t dwell in the same environment as we do, but that’s not necessarily a disadvantage. On the contrary, because a virtual environment is much more malleable than a physical one, an uploaded mind could have experiences and adventures we can only dream of, like living in a movie or a video game.
For transhumanists, mind-uploading fits perfectly into a “patternist” future. If the mind is a pattern of information, it doesn’t matter whether that pattern is instantiated in carbon-based neurons or silicon-based transistors; it is still authentically you. The Dutch neuroscientist Randal Koene refers to such patterns as Substrate-Independent Minds, or SIMs, and sees them as the key to immortality. “Your identity, your memories can then be embodied physically in many ways. They can also be backed up and operate robustly on fault-tolerant hardware with redundancy schemes,” he writes in the 2013 essay “Uploading to Substrate-Independent Minds.” The transhumanist holy grail is artificial general intelligence—a computer mind that can learn about any subject, rather than being confined to a narrow domain, such as chess. Even if such an AI started out in a rudimentary form, it would be able to apply itself to the problem of AI design and improve itself to think faster and deeper. Then the improved version would improve itself, and so on, exponentially. As long as it had access to more and more computing power, an artificial general intelligence could theoretically improve itself without limit, until it became more capable than all human beings put together.
This is the prospect that transhumanists refer to, with awe and anxiety, as “the singularity.” Bostrom thinks it’s quite reasonable to worry “that the world could be radically transformed and humanity deposed from its position as apex cogitator over the course of an hour or two,” before the AI’s creators realize what has happened. The most radical challenge of AI, however, is that it forces us to ask why humanity’s goals deserve to prevail. An AI takeover would certainly be bad for the human beings who are alive when it occurs, but perhaps a world dominated by nonhuman minds would be morally preferable in the end, with less cruelty and waste. Or maybe our preferences are entirely irrelevant. We might be in the position of God after he created humanity with free will, thus forfeiting the right to intervene when his creation makes mistakes.
The central difference between anti-humanists and transhumanists has to do with their ideas about meaning. Anti-humanists believe that the universe doesn’t need to include consciousness for its existence to be meaningful, while transhumanists believe the universe would be meaningless without minds to experience and understand it. But there is no requirement that those minds be human ones. In fact, AI minds might be more appreciative than we are of the wonder of creation. They might know nothing of the violence and hatred that often makes humanity loathsome to human beings themselves. Our greatest spiritual achievements might seem as crude and indecipherable to them as a coyote’s howl is to us.
Neither the sun nor death can be looked at with a steady eye, La Rochefoucauld said. The disappearance of the human race belongs in the same category. We can acknowledge that it’s bound to happen someday, but the possibility that the day might be tomorrow, or 10 years from now, is hard to contemplate.
Calls for the disappearance of humanity are hard to understand other than rhetorically. It’s natural to assume that transhumanism is just a dramatic way of drawing attention to the promise of new technology, while Anthropocene anti-humanism is really environmentalism in a hurry. Such skepticism is nourished by the way these schools of thought rely on unverifiable predictions.
But the accuracy of a prophecy is one thing; its significance is another. In the Gospel of Matthew, Jesus tells his followers that the world is going to end in their lifetime: “Verily I say to you, there are some standing here who shall not taste death till they see the Son of Man coming in His kingdom.” This proved not to be true—at least not in any straightforward sense—but the promise still changed the world.
The apocalyptic predictions of today’s transhumanist and anti-humanist thinkers are of a very different nature, but they too may be highly significant even if they don’t come to pass. Profound civilizational changes begin with a revolution in how people think about themselves and their destiny. The revolt against humanity has the potential to be such a beginning, with unpredictable consequences for politics, economics, technology, and culture.
The revolt against humanity has a great future ahead of it because it appeals to people who are at once committed to science and reason yet yearn for the clarity and purpose of an absolute moral imperative. It says that we can move the planet, maybe even the universe, in the direction of the good, on one condition—that we forfeit our own existence as a species.
In this way, the question of why humanity exists is given a convincing yet wholly immanent answer. Following the logic of sacrifice, we give our life meaning by giving it up.
Anthropocene anti-humanism and transhumanism share this premise, despite their contrasting visions of the post-human future. The former longs for a return to the natural equilibrium that existed on Earth before humans came along to disrupt it with our technological rapacity. The latter dreams of pushing forward, using technology to achieve a complete abolition of nature and its limitations. One sees reason as the serpent that got humanity expelled from Eden, while the other sees it as the only road back to Eden.
But both call for drastic forms of human self-limitation—whether that means the destruction of civilization, the renunciation of child-bearing, or the replacement of human beings by machines. These sacrifices are ways of expressing high ethical ambitions that find no scope in our ordinary, hedonistic lives: compassion for suffering nature, hope for cosmic dominion, love of knowledge. This essential similarity between anti-humanists and transhumanists means that they may often find themselves on the same side in the political and social struggles to come.
This article was adapted from Adam Kirsch’s book The Revolt Against Humanity.
It appears in the January/February 2023 print edition with the headline “The End of Us.”
Trump’s Next Coup Has Already Begun
January 6 was practice. Donald Trump’s GOP is much better positioned to subvert the next election.
Updated at 3:21 p.m. ET on December 9, 2021.
Technically, the next attempt to overthrow a national election may not qualify as a coup. It will rely on subversion more than violence, although each will have its place. If the plot succeeds, the ballots cast by American voters will not decide the presidency in 2024. Thousands of votes will be thrown away, or millions, to produce the required effect. The winner will be declared the loser. The loser will be certified president-elect.
The prospect of this democratic collapse is not remote. People with the motive to make it happen are manufacturing the means. Given the opportunity, they will act. They are acting already.
Who or what will safeguard our constitutional order is not apparent today. It is not even apparent who will try. Democrats, big and small D, are not behaving as if they believe the threat is real. Some of them, including President Joe Biden, have taken passing rhetorical notice, but their attention wanders. They are making a grievous mistake.
“The democratic emergency is already here,” Richard L. Hasen, a professor of law and political science at UC Irvine, told me in late October. Hasen prides himself on a judicious temperament. Only a year ago he was cautioning me against hyperbole. Now he speaks matter-of-factly about the death of our body politic. “We face a serious risk that American democracy as we know it will come to an end in 2024,” he said, “but urgent action is not happening.” For more than a year now, with tacit and explicit support from their party’s national leaders, state Republican operatives have been building an apparatus of election theft. Elected officials in Arizona, Texas, Georgia, Pennsylvania, Wisconsin, Michigan, and other states have studied Donald Trump’s crusade to overturn the 2020 election. They have noted the points of failure and have taken concrete steps to avoid failure next time. Some of them have rewritten statutes to seize partisan control of decisions about which ballots to count and which to discard, which results to certify and which to reject. They are driving out or stripping power from election officials who refused to go along with the plot last November, aiming to replace them with exponents of the Big Lie. They are fine-tuning a legal argument that purports to allow state legislators to override the choice of the voters.
By way of foundation for all the rest, Trump and his party have convinced a dauntingly large number of Americans that the essential workings of democracy are corrupt, that made-up claims of fraud are true, that only cheating can thwart their victory at the polls, that tyranny has usurped their government, and that violence is a legitimate response.
Any Republican might benefit from these machinations, but let’s not pretend there’s any suspense. Unless biology intercedes, Donald Trump will seek and win the Republican nomination for president in 2024. The party is in his thrall. No opponent can break it and few will try. Neither will a setback outside politics—indictment, say, or a disastrous turn in business—prevent Trump from running. If anything, it will redouble his will to power.
As we near the anniversary of January 6, investigators are still unearthing the roots of the insurrection that sacked the Capitol and sent members of Congress fleeing for their lives. What we know already, and could not have known then, is that the chaos wrought on that day was integral to a coherent plan. In retrospect, the insurrection takes on the aspect of rehearsal.
Even in defeat, Trump has gained strength for a second attempt to seize office, should he need to, after the polls close on November 5, 2024. It may appear otherwise—after all, he no longer commands the executive branch, which he tried and mostly failed to enlist in his first coup attempt. Yet the balance of power is shifting his way in arenas that matter more.
Trump is successfully shaping the narrative of the insurrection in the only political ecosystem that matters to him. The immediate shock of the event, which briefly led some senior Republicans to break with him, has given way to a near-unanimous embrace. Virtually no one a year ago, certainly not I, predicted that Trump could compel the whole party’s genuflection to the Big Lie and the recasting of insurgents as martyrs. Today the few GOP dissenters are being cast out. “2 down, 8 to go!” Trump gloated at the retirement announcement of Representative Adam Kinzinger, one of 10 House Republicans to vote for his second impeachment.
Trump has reconquered his party by setting its base on fire. Tens of millions of Americans perceive their world through black clouds of his smoke. His deepest source of strength is the bitter grievance of Republican voters that they lost the White House, and are losing their country, to alien forces with no legitimate claim to power. This is not some transient or loosely committed population. Trump has built the first American mass political movement in the past century that is ready to fight by any means necessary, including bloodshed, for its cause.
At the edge of the Capitol grounds, just west of the reflecting pool, a striking figure stands in spit-shined shoes and a 10-button uniform coat. He is 6 foot 4, 61 years old, with chiseled good looks and an aura of command that is undimmed by retirement. Once, according to the silver bars on his collar, he held the rank of captain in the New York Fire Department. He is not supposed to wear the old uniform at political events, but he pays that rule no mind today. The uniform tells the world that he is a man of substance, a man who has saved lives and held authority. Richard C. Patterson needs every shred of that authority for this occasion. He has come to speak on behalf of an urgent cause. “Pelosi’s political prisoners,” he tells me, have been unjustly jailed.
Patterson is talking about the men and women held on criminal charges after invading the Capitol on January 6. He does not at all approve of the word insurrection.
“It wasn’t an insurrection,” he says at a September 18 rally called “Justice for January 6.” “None of our countrymen and -women who are currently being held are charged with insurrection. They’re charged with misdemeanor charges.” Patterson is misinformed on that latter point. Of the more than 600 defendants, 78 are in custody when we speak. Most of those awaiting trial in jail are charged with serious crimes such as assault on a police officer, violence with a deadly weapon, conspiracy, or unlawful possession of firearms or explosives. Jeffrey McKellop of Virginia, for instance, is alleged to have hurled a flagpole like a spear into an officer’s face. (McKellop has pleaded not guilty.) Patterson was not in Washington on January 6, but he is fluent in the revisionist narratives spread by fabulists and trolls on social media. He knows those stories verse by verse, the ones about January 6 and the ones about the election rigged against Trump. His convictions are worth examining because he and the millions of Americans who think as he does are the primary source of Trump’s power to corrupt the next election. With a sufficient dose of truth serum, most Republican politicians would likely confess that Biden won in 2020, but the great mass of lumpen Trumpers, who believe the Big Lie with unshakable force, oblige them to pretend otherwise. Like so many others, Patterson is doing his best to parse a torrential flow of political information, and he is failing. His failures leave him, nearly always, with the worldview expounded by Trump.
We fall into a long conversation in the sweltering heat, then continue it for weeks by phone and email. I want to plumb the depths of his beliefs, and understand what lies behind his commitment to them. He is prepared to grant me the status of “fellow truth-seeker.” “The ‘Stop the Steal’ rally for election integrity was peaceful,” he says. “I think the big takeaway is when Old Glory made its way into the Rotunda on January 6, our fearless public officials dove for cover at the sight of the American flag.” What about the violence? The crowds battling police? “The police were seen on video in uniform allowing people past the bicycle-rack barricades and into the building,” he replies. “I mean, that’s established. The unarmed crowd did not overpower the officers in body armor. That doesn’t happen. They were allowed in.” Surely he has seen other video, though. Shaky, handheld footage, taken by the rioters themselves, of police officers falling under blows from a baseball bat, a hockey stick, a fire extinguisher, a length of pipe. A crowd crushing Officer Daniel Hodges in a doorway, shouting “Heave! Ho!” Does Patterson know that January 6 was among the worst days for law-enforcement casualties since September 11, 2001? That at least 151 officers from the Capitol Police and the Metropolitan Police Department suffered injuries, including broken bones, concussions, chemical burns, and a Taser-induced heart attack? Patterson has not heard these things. Abruptly, he shifts gears. Maybe there was violence, but the patriots were not to blame.
“There were people there deliberately to make it look worse than what it was,” he explains. “A handful of ill-behaved, potentially, possibly agents provocateur.” He repeats the phrase: “Agents provocateur, I have on information, were in the crowd … They were there for nefarious means. Doing the bidding of whom? I have no idea.” “‘On information’?” I ask. What information? “You can look up this name,” he says. “Retired three-star Air Force General McInerney. You got to find him on Rumble. They took him off YouTube.” Sure enough, there on Rumble (and still on YouTube) I find a video of Lieutenant General Thomas G. McInerney, 84, three decades gone from the Air Force.
His story takes a long time to tell, because the plot includes an Italian satellite and Pakistan’s intelligence service and former FBI Director James Comey selling secret U.S. cyberweapons to China. Eventually it emerges that “Special Forces mixed with antifa” combined to invade the seat of Congress on January 6 and then blame the invasion on Trump supporters, with the collusion of Senators Chuck Schumer and Mitch McConnell, along with House Speaker Nancy Pelosi.
In a further wrinkle, Pelosi, by McInerney’s account, became “frantic” soon afterward when she discovered that her own false-flag operation had captured a laptop filled with evidence of her treason. McInerney had just come from the White House, he says in his monologue, recorded two days after the Capitol riot. Trump was about to release the Pelosi evidence. McInerney had seen the laptop with his own eyes.
It shook me that Patterson took this video for proof. If my house had caught fire 10 years before, my life might have depended on his discernment and clarity of thought. He was an Eagle Scout. He earned a college degree. He keeps current on the news. And yet he has wandered off from the empirical world, placing his faith in fantastic tales that lack any basis in fact or explicable logic.
McInerney’s tale had spread widely on Facebook, Twitter, Parler, and propaganda sites like We Love Trump and InfoWars. It joined the January 6 denialist canon and lodged firmly in Patterson’s head. I reached the general by phone and asked about evidence for his claims. He mentioned a source, whose name he couldn’t reveal, who had heard some people saying “We are playing antifa today.” McInerney believed they were special operators because “they looked like SOF people.” He believed that one of them had Pelosi’s laptop, because his source had seen something bulky and square under the suspect’s raincoat. He conceded that even if it was a laptop, he couldn’t know whose it was or what was on it. For most of his story, McInerney did not even claim to have proof. He was putting two and two together. It stood to reason. In truth, prosecutors had caught and charged a neo-Nazi sympathizer who had videotaped herself taking the laptop from Pelosi’s office and bragged about it on Discord. She was a home health aide, not a special operator. (As of this writing, she has not yet entered a plea.) The general’s son, Thomas G. McInerney Jr., a technology investor, learned that I had been talking with his father and asked for a private word with me. He was torn between conflicting obligations of filial loyalty, and took a while to figure out what he wanted to say.
“He has a distinguished service record,” he told me after an otherwise off-the-record conversation. “He wants what’s best for the nation and he speaks with a sense of authority, but I have concerns at his age that his judgment is impaired. The older he’s gotten, the stranger things have gotten in terms of what he’s saying.” I tell all of this and more to Patterson. McInerney, the Military Times reported, “went off the rails” after a successful Air Force career. For a while during the Obama years he was a prominent birther and appeared a lot on Fox News, before being fired as a Fox commentator in 2018 for making a baseless claim about John McCain. Last November, he told the WVW Broadcast Network that the CIA operated a computer-server farm in Germany that had helped rig the presidential vote for Biden, and that five Special Forces soldiers had just died trying to seize the evidence. The Army and U.S. Special Operations Command put out dutiful statements that no such mission and no such casualties had taken place.
Of course, Patterson wrote to me sarcastically, “governments would NEVER lie to their OWN citizens.” He did not trust the Pentagon’s denials. There are seldom words or time enough to lay a conspiracy theory to rest. Each rebuttal is met with a fresh round of delusions.
Patterson is admirably eager for a civil exchange of views. He portrays himself as a man who “may be wrong, and if I am I admit it,” and he does indeed concede on small points. But a deep rage seems to fuel his convictions. I asked him the first time we met if we could talk “about what’s happening in the country, not the election itself.” His smile faded. His voice rose.
“There ain’t no fucking way we are letting go of 3 November 2020,” he said. “That is not going to fucking happen. That’s not happening. This motherfucker was stolen. The world knows this bumbling, senile, career corrupt fuck squatting in our White House did not get 81 million votes.” He had many proofs. All he really needed, though, was arithmetic. “The record indicates 141 [million] of us were registered to vote and cast a ballot on November 3,” he said. “Trump is credited with 74 million votes out of 141 million. That leaves 67 million for Joe; that doesn’t leave any more than that. Where do these 14 million votes come from?” Patterson did not recall where he had heard those figures. He did not think he had read Gateway Pundit, which was the first site to advance the garbled statistics. Possibly he saw Trump amplify the claim on Twitter or television, or some other stop along the story’s cascading route across the right-wing mediaverse.
Reuters did a good job debunking the phony math, which got the total number of voters wrong.
I was interested in something else: the worldview that guided Patterson through the statistics. It appeared to him (incorrectly) that not enough votes had been cast to account for the official results. Patterson assumed that only fraud could explain the discrepancy, that all of Trump’s votes were valid, and that the invalid votes must therefore belong to Biden.
“Why don’t you say Joe Biden got 81 million and there’s only 60 million left for Trump?” I asked.
Patterson was astonished.
“It’s not disputed, the 74 million vote count that was credited to President Trump’s reelection effort,” he replied, baffled at my ignorance. “It’s not in dispute … Have you heard that President Trump engaged in cheating and fraudulent practices and crooked machines?” Biden was the one accused of rigging the vote. Everybody said so. And for reasons unspoken, Patterson wanted to be carried away by that story.
Robert A. Pape, a well-credentialed connoisseur of political violence, watched the mob attack the Capitol on a television at home on January 6. A name came unbidden to his mind: Slobodan Milošević.
Back in June 1989, Pape had been a postdoctoral fellow in political science when the late president of Serbia delivered a notorious speech. Milošević compared Muslims in the former Yugoslavia to Ottomans who had enslaved the Serbs six centuries before. He fomented years of genocidal war that destroyed the hope for a multiethnic democracy, casting Serbs as defenders against a Muslim onslaught on “European culture, religion, and European society in general.” By the time Trump unleashed the angry crowd on Congress, Pape, who is 61, had become a leading scholar on the intersection of warfare and politics. He saw an essential similarity between Milošević and Trump—one that suggested disturbing hypotheses about Trump’s most fervent supporters. Pape, who directs the University of Chicago Project on Security and Threats, or CPOST, called a staff meeting two days after the Capitol attack. “I talked to my research team and told them we were going to reorient everything we were doing,” he told me.
Milošević, Pape said, inspired bloodshed by appealing to fears that Serbs were losing their dominant place to upstart minorities. “What he is arguing” in the 1989 speech “is that Muslims in Kosovo and generally throughout the former Yugoslavia are essentially waging genocide on the Serbs,” Pape said. “And really, he doesn’t use the word replaced.
But this is what the modern term would be.” Pape was alluding to a theory called the “Great Replacement.” The term itself has its origins in Europe. But the theory is the latest incarnation of a racist trope that dates back to Reconstruction in the United States. Replacement ideology holds that a hidden hand (often imagined as Jewish) is encouraging the invasion of nonwhite immigrants, and the rise of nonwhite citizens, to take power from white Christian people of European stock. When white supremacists marched with torches in Charlottesville, Virginia, in 2017, they chanted, “Jews will not replace us!” Trump borrowed periodically from the rhetorical canon of replacement. His remarks on January 6 were more disciplined than usual for a president who typically spoke in tangents and unfinished thoughts. Pape shared with me an analysis he had made of the text that Trump read from his prompter.
“Our country has been under siege for a long time, far longer than this four-year period,” Trump told the crowd. “You’re the real people. You’re the people that built this nation.” He famously added, “And we fight. We fight like hell. And if you don’t fight like hell, you’re not going to have a country anymore.” Just like Milošević, Trump had skillfully deployed three classic themes of mobilization to violence, Pape wrote: “The survival of a way of life is at stake. The fate of the nation is being determined now. Only genuine brave patriots can save the country.” Watching how the Great Replacement message was resonating with Trump supporters, Pape and his colleagues suspected that the bloodshed on January 6 might augur something more than an aberrant moment in American politics. The prevailing framework for analyzing extremist violence in the U.S., they thought, might not be adequate to explain what was happening.
When the Biden administration published a new homeland-security strategy in June, it described the assault on the Capitol as a product of “domestic violent extremists,” and invoked an intelligence assessment that said attacks by such extremists come primarily from lone wolves or small cells. Pape and his colleagues doubted that this captured what had happened on January 6. They set about seeking systematic answers to two basic questions: Who were the insurgents, in demographic terms? And what political beliefs animated them and their sympathizers? Pape’s three-bedroom house, half an hour’s drive south of Chicago, became the pandemic headquarters of a virtual group of seven research professionals, supported by two dozen University of Chicago undergraduates. The CPOST researchers gathered court documents, public records, and news reports to compile a group profile of the insurgents.
“The thing that got our attention first was the age,” Pape said. He had been studying violent political extremists in the United States, Europe, and the Middle East for decades. Consistently, around the world, they tended to be in their 20s and early 30s. Among the January 6 insurgents, the median age was 41.8. That was wildly atypical.
Then there were economic anomalies. Over the previous decade, one in four violent extremists arrested by the FBI had been unemployed. But only 7 percent of the January 6 insurgents were jobless, and more than half of the group had a white-collar job or owned their own business. There were doctors, architects, a Google field-operations specialist, the CEO of a marketing firm, a State Department official. “The last time America saw middle-class whites involved in violence was the expansion of the second KKK in the 1920s,” Pape told me.
Yet these insurgents were not, by and large, affiliated with known extremist groups. Several dozen did have connections with the Proud Boys, the Oath Keepers, or the Three Percenters militia, but a larger number—six out of every seven who were charged with crimes—had no ties like that at all.
Kathleen Belew, a University of Chicago historian and co-editor of A Field Guide to White Supremacy, says it is no surprise that extremist groups were in the minority. “January 6 wasn’t designed as a mass-casualty attack, but rather as a recruitment action” aimed at mobilizing the general population, she told me. “For radicalized Trump supporters … I think it was a protest event that became something bigger.”

Pape’s team mapped the insurgents by home county and ran statistical analyses looking for patterns that might help explain their behavior. The findings were counterintuitive. Counties won by Trump in the 2020 election were less likely than counties won by Biden to send an insurrectionist to the Capitol. The higher Trump’s share of votes in a county, in fact, the lower the probability that insurgents lived there. Why would that be? Likewise, the more rural the county, the fewer the insurgents. The researchers tried a hypothesis: Insurgents might be more likely to come from counties where white household income was dropping. Not so. Household income made no difference at all.
Only one meaningful correlation emerged. Other things being equal, insurgents were much more likely to come from a county where the white share of the population was in decline. For every one-point drop in a county’s percentage of non-Hispanic whites from 2015 to 2019, the likelihood of an insurgent hailing from that county increased by 25 percent. This was a strong link, and it held up in every state.
Trump and some of his most vocal allies, Tucker Carlson of Fox News notably among them, had taught supporters to fear that Black and brown people were coming to replace them. According to the latest census projections, white Americans will become a minority, nationally, in 2045. The insurgents could see their majority status slipping before their eyes.
The CPOST team decided to run a national opinion survey in March, based on themes it had gleaned from the social-media posts of insurgents and the statements they’d made to the FBI under questioning. The researchers first looked to identify people who said they “don’t trust the election results” and were prepared to join a protest “even if I thought the protest might turn violent.” The survey found that 4 percent of Americans agreed with both statements, a relatively small fraction that nonetheless corresponds to 10 million American adults.
In June, the researchers sharpened the questions. This brought another surprise. In the new poll, they looked for people who not only distrusted the election results but agreed with the stark assertion that “the 2020 election was stolen from Donald Trump and Joe Biden is an illegitimate president.” And instead of asking whether survey subjects would join a protest that “might” turn violent, they looked for people who affirmed that “the use of force is justified to restore Donald Trump to the presidency.” Pollsters ordinarily expect survey respondents to give less support to more transgressive language. “The more you asked pointed questions about violence, the more you should be getting ‘social-desirability bias,’ where people are just more reluctant,” Pape told me.
Here, the opposite happened: the more extreme the sentiments, the greater the number of respondents who endorsed them. In the June results, just over 8 percent agreed that Biden was illegitimate and that violence was justified to restore Trump to the White House. That corresponds to 21 million American adults. Pape called them “committed insurrectionists.” (An unrelated Public Religion Research Institute survey on November 1 found that an even larger proportion of Americans, 12 percent, believed both that the election had been stolen from Trump and that “true American patriots may have to resort to violence in order to save our country.”)

Why such a large increase? Pape believed that Trump supporters simply preferred the harsher language, but “we cannot rule out that attitudes hardened” between the first and second surveys. Either interpretation is troubling. The latter, Pape said, “would be even more concerning since over time we would normally think passions would cool.” In the CPOST polls, only one other statement won overwhelming support among the 21 million committed insurrectionists. Almost two-thirds of them agreed that “African American people or Hispanic people in our country will eventually have more rights than whites.” Slicing the data another way: Respondents who believed in the Great Replacement theory, regardless of their views on anything else, were nearly four times as likely as those who did not to support the violent removal of the president.
The committed insurrectionists, Pape judged, were genuinely dangerous. There were not many militia members among them, but more than one in four said the country needed groups like the Oath Keepers and Proud Boys. One-third of them owned guns, and 15 percent had served in the military. All had easy access to the organizing power of the internet.
What Pape was seeing in these results did not fit the government model of lone wolves and small groups of extremists. “This really is a new, politically violent mass movement,” he told me. “This is collective political violence.” Pape drew an analogy to Northern Ireland in the late 1960s, at the dawn of the Troubles. “In 1968, 13 percent of Catholics in Northern Ireland said that the use of force for Irish nationalism was justified,” he said. “The Provisional IRA was created shortly thereafter with only a few hundred members.” Decades of bloody violence followed. And 13 percent support was more than enough, in those early years, to sustain it.
“It’s the community’s support that is creating a mantle of legitimacy—a mandate, if you would, that justifies the violence” of a smaller, more committed group, Pape said. “I’m very concerned it could happen again, because what we’re seeing in our surveys … is 21 million people in the United States who are essentially a mass of kindling or a mass of dry wood that, if married to a spark, could in fact ignite.” The story of Richard Patterson, once you delve into it, is consonant with Pape’s research. Trump appealed to him as an “in-your-face, brash ‘America First’ guy who has the interest of ‘We the People.’ ” But there was more. Decades of personal and political grudges infuse Patterson’s understanding of what counts as “America” and who counts as “we.” Where Patterson lives, in the Bronx, there were 20,413 fewer non-Hispanic white people in the 2020 census than in 2010. The borough had reconfigured from 11 percent white to 9 percent.
Patterson came from Northern Irish stock and grew up in coastal Northern California. He was a “lifetime C student” who found ambition at age 14 when he began to hang around at a local fire station. As soon as he finished high school he took the test to join the Oakland fire department, earning, he said, outstanding scores.
“But in those days,” he recalled, “Oakland was just beginning to diversify and hire females. So no job for the big white kid.” The position went to “this little woman … who I know failed the test.” Patterson tried again in San Francisco, but found the department operating under a consent decree. Women and people of color, long excluded, had to be accepted in the incoming cohort. “So, again, the big white kid is told, ‘Fuck you, we got a whole fire department of guys that look just like you. We want the department to look different because diversity is all about an optic.’ ” The department could hire “the Black applicant instead of myself.” Patterson bought a one-way ticket to New York, earned a bachelor’s degree in fire science, and won an offer to join New York’s Bravest. But desegregation had come to New York, too, and Patterson found himself seething.
In 1982, a plaintiff named Brenda Berkman had won a lawsuit that opened the door to women in the FDNY. A few years later, the department scheduled training sessions “to assist male firefighters in coming to terms with the assimilation of females into their ranks.” Patterson’s session did not go well. He was suspended without pay for 10 days after a judge found that he had called the trainer a scumbag and a Communist and chased him out of the room, yelling, “Why don’t you fuck Brenda Berkman and I hope you both die of AIDS.” The judge found that the trainer had “reasonably feared for his safety.” Patterson continues to maintain his innocence.
Later, as a lieutenant, Patterson came across a line on a routine form that asked for his gender and ethnicity. He resented that. “There was no box for ‘Fuck off,’ so I wrote in ‘Fuck off,’ ” he said. “So they jammed me up for that”—this time a 30-day suspension without pay.
Even while Patterson rose through the ranks, he kept on finding examples of how the world was stacked against people like him. “I look at the 2020 election as sort of an example on steroids of affirmative action. The straight white guy won, but it was stolen from him and given to somebody else.” Wait. Wasn’t this a contest between two straight white guys? Not really, Patterson said, pointing to Vice President Kamala Harris: “Everybody touts the gal behind the president, who is currently, I think, illegitimately in our White House. It is, quote, a woman of color, like this is some—like this is supposed to mean something.” And do not forget, he added, that Biden said, “If you have a problem figuring out whether you’re for me or Trump, then you ain’t Black.” What to do about all this injustice? Patterson did not want to say, but he alluded to an answer: “Constitutionally, the head of the executive branch can’t tell an American citizen what the fuck to do. Constitutionally, all the power rests with the people. That’s you and me, bro. And Mao is right that all the power emanates from the barrel of a gun.” Did he own a gun himself? “My Second Amendment rights, like my medical history, are my own business,” he replied.
Many of Patterson’s fellow travelers at the “Justice for January 6” protest were more direct about their intentions. One of them was a middle-aged man who gave his name as Phil. The former Coast Guard rescue diver from Kentucky had joined the crowd at the Capitol on January 6 but said he has not heard from law enforcement. Civil war is coming, he told me, and “I would fight for my country.” Was he speaking metaphorically? “No, I’m not,” he said. “Oh Lord, I think we’re heading for it. I don’t think it’ll stop. I truly believe it. I believe the criminals—Nancy Pelosi and her criminal cabal up there—is forcing a civil war. They’re forcing the people who love the Constitution, who will give their lives to defend the Constitution—the Democrats are forcing them to take up arms against them, and God help us all.” Gregory Dooner, who was selling flags at the protest, said he had been just outside the Capitol on January 6 as well. He used to sell ads for AT&T Advertising Solutions, and now, in retirement, he peddles MAGA gear: $10 for a small flag, $20 for a big one.
Violent political conflict, he told me, was inevitable, because Trump’s opponents “want actual war here in America. That’s what they want.” He added a slogan of the Three Percenters militia: “When tyranny becomes law, rebellion becomes duty.” The Declaration of Independence, which said something like that, was talking about King George III. If taken seriously today, the slogan calls for a war of liberation against the U.S. government.
“Yo, hey—hey,” Dooner called out to a customer who had just unfurled one of his banners. “I want to read him the flag.” He recited the words inscribed on the Stars and Stripes: “A free people ought not only to be armed and disciplined but they should have sufficient arms and ammunition to maintain a status of independence from any who might attempt to abuse them, which would include their own government.” “George Washington wrote that,” he said. “That’s where we’re at, gentlemen.” I looked it up.
George Washington did not write anything like that.
The flag was Dooner’s best seller, even so.
Over the course of Trump’s presidency, one of the running debates about the man boiled down to: menace or clown? Threat to the republic, or authoritarian wannabe who had no real chance of breaking democracy’s restraints? Many observers rejected the dichotomy—the essayist Andrew Sullivan, for instance, described the former president as “both farcical and deeply dangerous.” But during the interregnum between November 3 and Inauguration Day, the political consensus leaned at first toward farce. Biden had won. Trump was breaking every norm by refusing to concede, but his made-up claims of fraud were getting him nowhere.
In a column headlined “There Will Be No Trump Coup,” the New York Times writer Ross Douthat had predicted, shortly before Election Day, that “any attempt to cling to power illegitimately will be a theater of the absurd.” He was responding in part to my warning in these pages that Trump could wreak great harm in such an attempt.
One year later, Douthat looked back. In scores of lawsuits, “a variety of conservative lawyers delivered laughable arguments to skeptical judges and were ultimately swatted down,” he wrote, and state election officials warded off Trump’s corrupt demands. My own article, Douthat wrote, had anticipated what Trump tried to do. “But at every level he was rebuffed, often embarrassingly, and by the end his plotting consisted of listening to charlatans and cranks proposing last-ditch ideas” that could never succeed.
Douthat also looked ahead, with guarded optimism, to the coming presidential election. There are risks of foul play, he wrote, but “Trump in 2024 will have none of the presidential powers, legal and practical, that he enjoyed in 2020 but failed to use effectively in any shape or form.” And “you can’t assess Trump’s potential to overturn an election from outside the Oval Office unless you acknowledge his inability to effectively employ the powers of that office when he had them.” That, I submit respectfully, is a profound misunderstanding of what mattered in the coup attempt a year ago. It is also a dangerous underestimate of the threat in 2024—which is larger, not smaller, than it was in 2020.
It is true that Trump tried and failed to wield his authority as commander in chief and chief law-enforcement officer on behalf of the Big Lie. But Trump did not need the instruments of office to sabotage the electoral machinery. It was citizen Trump—as litigant, as candidate, as dominant party leader, as gifted demagogue, and as commander of a vast propaganda army—who launched the insurrection and brought the peaceful transfer of power to the brink of failure.
All of these roles are still Trump’s for the taking. In nearly every battle space of the war to control the count of the next election—statehouses, state election authorities, courthouses, Congress, and the Republican Party apparatus—Trump’s position has improved since a year ago.
To understand the threat today, you have to see with clear eyes what happened, what is still happening, after the 2020 election. The charlatans and cranks who filed lawsuits and led public spectacles on Trump’s behalf were sideshows. They distracted from the main event: a systematic effort to nullify the election results and then reverse them. As milestones passed—individual certification by states, the meeting of the Electoral College on December 14—Trump’s hand grew weaker. But he played it strategically throughout. The more we learn about January 6, the clearer the conclusion becomes that it was the last gambit in a soundly conceived campaign—one that provides a blueprint for 2024.
The strategic objective of nearly every move by the Trump team after the networks called the election for Joe Biden on November 7 was to induce Republican legislatures in states that Biden won to seize control of the results and appoint Trump electors instead. Every other objective—in courtrooms, on state election panels, in the Justice Department, and in the office of the vice president—was instrumental to that end.
Electors are the currency in a presidential contest and, under the Constitution, state legislators control the rules for choosing them.
Article II provides that each state shall appoint electors “in such Manner as the Legislature thereof may direct.” Since the 19th century, every state has ceded the choice to its voters, automatically certifying electors who support the victor at the polls, but in Bush v. Gore the Supreme Court affirmed that a state “can take back the power to appoint electors.” No court has ever said that a state could do that after its citizens have already voted, but that was the heart of Trump’s plan.
Every path to stealing the election required GOP legislatures in at least three states to repudiate the election results and substitute presidential electors for Trump. That act alone would not have ensured Trump’s victory. Congress would have had to accept the substitute electors when it counted the votes, and the Supreme Court might have had a say. But without the state legislatures, Trump had no way to overturn the verdict of the voters.
Trump needed 38 electors to reverse Biden’s victory, or 37 for a tie that would throw the contest to the House of Representatives. (Biden won the Electoral College 306 to 232; shifting 38 votes would have given Trump 270, and 37 would have produced a 269–269 deadlock.) For all his improvisation and flailing in the postelection period, Trump never lost sight of that goal. He and his team focused on obtaining the required sum from among the 79 electoral votes in Arizona (11), Georgia (16), Michigan (16), Nevada (6), Pennsylvania (20), and Wisconsin (10).
Trump had many tactical setbacks. He and his advocates lost 64 of 65 challenges to election results in court, and many of them were indeed comically inept. His intimidation of state officials, though it also failed in the end, was less comical. Trump was too late, barely, to strong-arm Republican county authorities into rejecting Detroit’s election tally (they tried and failed to rescind their “yes” votes after the fact), and Aaron Van Langevelde, the crucial Republican vote on Michigan’s Board of State Canvassers, stood up to Trump’s pressure to block certification of the statewide results. Georgia Secretary of State Brad Raffensperger refused the president’s request to “find” 11,780 votes for Trump after two recounts confirming Biden’s win. Two Republican governors, in Georgia and Arizona, signed certificates of Biden’s victory; the latter did so even as a telephone call from Trump rang unanswered in his pocket.
The acting attorney general stared down Trump’s plan to replace him with a subordinate, Jeffrey B. Clark, who was prepared to send a letter advising the Georgia House and Senate to reconsider their state’s election results.
Had Trump succeeded in any of these efforts, he would have given Republican state legislators a credible excuse to meddle; one success might have led to a cascade.
Trump used judges, county boards, state officials, and even his own Justice Department as stepping-stones to his ultimate target: Republican legislators in swing states. No one else could give him what he wanted.
Even as these efforts foundered, the Trump team achieved something crucial and enduring by convincing tens of millions of angry supporters, including a catastrophic 68 percent of all Republicans in a November PRRI poll, that the election had been stolen from Trump. Nothing close to this loss of faith in democracy has happened here before. Even Confederates recognized Abraham Lincoln’s election; they tried to secede because they knew they had lost. Delegitimating Biden’s victory was a strategic win for Trump—then and now—because the Big Lie became the driving passion of the voters who controlled the fate of Republican legislators, and Trump’s fate was in the legislators’ hands.
Even so, three strategic points of failure left Trump in dire straits in the days before January 6.
First, although Trump won broad rhetorical support from state legislators for his fictitious claims of voter fraud, they were reluctant to take the radical, concrete step of nullifying the votes of their own citizens. Despite enormous pressure, none of the six contested states put forward an alternate slate of electors for Trump. Only later, as Congress prepared to count the electoral votes, did legislators in some of those states begin talking unofficially about “decertifying” the Biden electors.
The second strategic point of failure for Trump was Congress, which had the normally ceremonial role of counting the electoral votes. In the absence of action by state legislatures, the Trump team had made a weak attempt at a fallback, arranging for Republicans in each of the six states to appoint themselves “electors” and transmit their “ballots” for Trump to the president of the Senate. Trump would have needed both chambers of Congress to approve his faux electors and hand him the presidency. Republicans controlled only the Senate, but that might have enabled Trump to create an impasse in the count. The trouble there was that fewer than a dozen Republican senators were on board.
Trump’s third strategic setback was his inability, despite all expectations, to induce his loyal No. 2 to go along. Vice President Mike Pence would preside over the Joint Session of Congress to count the electoral votes, and in a memo distributed in early January, Trump’s legal adviser John Eastman claimed, on “very solid legal authority,” that Pence himself “does the counting, including the resolution of disputed electoral votes … and all the Members of Congress can do is watch.” If Congress would not crown Trump president, in other words, Pence could do it himself. And if Pence would not do that, he could simply disregard the time limits for debate under the Electoral Count Act and allow Republicans like Senator Ted Cruz to filibuster. “That creates a stalemate,” Eastman wrote, “that would give the state legislatures more time.” Time.
The clock was ticking. Several of Trump’s advisers, Rudy Giuliani among them, told allies that friendly legislatures were on the brink of convening special sessions to replace their Biden electors. The Trump conspiracy had made nowhere near that much progress, in fact, but Giuliani was saying it could be done in “five to 10 days.” If Congress went ahead with the count on January 6, it would be too late.
On the afternoon of January 5, Sidney Powell—she of the “Kraken” lawsuits, for which she would later be sanctioned in one court and sued in another—prepared an emergency motion addressed to Justice Samuel Alito.
The motion, entered into the Supreme Court docket the next day, would go largely unnoticed by the media and the public amid the violence of January 6; few have heard of it even now. But it was Plan A to buy Trump some time.
Alito was the circuit justice for the Fifth Circuit, where Powell, on behalf of Representative Louie Gohmert, had sued to compel Mike Pence to take charge of validating electors, disregarding the statutory role of Congress. The vice president had “exclusive authority and sole discretion as to which set of electors to count or even whether to count no set of electors,” Powell wrote. The Electoral Count Act, which says quite otherwise, was unconstitutional.
Powell did not expect Alito to rule on the merits immediately. She asked him to enter an emergency stay of the electoral count and schedule briefs on the constitutional claim. If Alito granted the stay, the clock on the election would stop and Trump would gain time to twist more arms in state legislatures.
Late in the same afternoon, January 5, Steve Bannon sat behind a microphone for his live War Room show, backswept gray hair spilling from his headphones to the epaulets on a khaki field jacket. He was talking, not very guardedly, about Trump’s Plan B to buy time the next day.
“The state legislatures are the center of gravity” of the fight, he said, because “people are going back to the original interpretation of the Constitution.” And there was big news: The Republican leaders of the Pennsylvania Senate, who had resisted pressure from Trump to nullify Biden’s victory, had just signed their names to a letter averring that the commonwealth’s election results “should not have been certified by our Secretary of State.” (Bannon thanked his viewers for staging protests at those legislators’ homes in recent days.) The letter, addressed to Republican leaders in Congress, went on to “ask that you delay certification of the Electoral College to allow due process as we pursue election integrity in our Commonwealth.” For weeks, Rudy Giuliani had starred in spurious “fraud” hearings in states where Biden had won narrowly. “After all these hearings,” Bannon exulted on air, “we finally have a state legislature … that is moving.” More states, the Trump team hoped, would follow Pennsylvania’s lead.
Meanwhile, the Trumpers would use the new letter as an excuse for putting off a statutory requirement to count the electoral votes “on the sixth day of January.” Senator Cruz and several allies proposed an “emergency” 10-day delay, ostensibly for an audit.
This was a lawless plan on multiple grounds. While the Constitution gives state legislatures the power to select electors, it does not provide for “decertifying” electors after they have cast their ballots in the Electoral College, which had happened weeks before. Even if Republicans had acted earlier, they could not have dismissed electors by writing a letter. Vanishingly few legal scholars believed that a legislature could appoint substitute electors by any means after voters had made their choice. And the governing statute, the Electoral Count Act, had no provision for delay past January 6, emergency or otherwise. Trump’s team was improvising at this point, hoping that it could make new law in court, or that legal niceties would be overwhelmed by events. If Pence or the Republican-controlled Senate had fully backed Trump’s maneuver, there is a chance that they might in fact have produced a legal stalemate that the incumbent could have exploited to stay in power.
Above all else, Bannon knew that Trump had to stop the count, which was set to begin at 1 p.m. the next day. If Pence would not stop it and Alito did not come through, another way would have to be found.
“Tomorrow morning, look, what’s going to happen, we’re going to have at the Ellipse—President Trump speaks at 11,” Bannon said, summoning his posse to turn up when the gates opened at 7 a.m. Bannon would be back on air in the morning with “a lot more news and analysis of exactly what’s going to go on through the day.” Then a knowing smile crossed Bannon’s face. He swept a palm in front of him, and he said the words that would capture attention, months later, from a congressional select committee.
“I’ll tell you this,” Bannon said. “It’s not going to happen like you think it’s going to happen. Okay, it’s going to be quite extraordinarily different. All I can say is, strap in.” Earlier the same day, he had predicted, “All hell is going to break loose tomorrow.” Bannon signed off at 6:58 p.m. Later that night he turned up in another war room, this one a suite at the Willard Hotel, across the street from the White House.
He and others in Trump’s close orbit, including Eastman and Giuliani, had been meeting there for days.
Congressional investigators have been deploying subpoenas and the threat of criminal sanctions—Bannon has been indicted for contempt of Congress—to discover whether they were in direct contact with the “Stop the Steal” rally organizers and, if so, what they planned together.
Shortly after Bannon signed off, a 6-foot-3-inch mixed martial artist named Scott Fairlamb responded to his call. Fairlamb, who fought under the nickname “Wildman,” reposted Bannon’s war cry to Facebook: “All hell is going to break loose tomorrow.” The next morning, after driving before dawn from New Jersey to Washington, he posted again: “How far are you willing to go to defend our Constitution?” Fairlamb, then 43, answered the question for his own part a few hours later at the leading edge of a melee on the West Terrace of the Capitol—seizing a police baton and later punching an officer in the face. “What patriots do? We fuckin’ disarm them and then we storm the fuckin’ Capitol!” he screamed at fellow insurgents.
Less than an hour earlier, at 1:10 p.m., Trump had finished speaking and directed the crowd toward the Capitol. The first rioters breached the building at 2:11 p.m. through a window they shattered with a length of lumber and a stolen police shield. About one minute later, Fairlamb burst through the Senate Wing Door brandishing the baton, a teeming mob behind him. (Fairlamb pleaded guilty to assaulting an officer and other charges.) Another minute passed, and then without warning, at 2:13, a Secret Service detail pulled Pence away from the Senate podium, hustling him out through a side door and down a short stretch of hallway.
Pause for a moment to consider the choreography. Hundreds of angry men and women are swarming through the halls of the Capitol. They are fresh from victory in hand-to-hand combat with an outnumbered force of Metropolitan and Capitol Police. Many have knives or bear spray or baseball bats or improvised cudgels. A few have thought to carry zip-tie wrist restraints. Some are shouting “Hang Mike Pence!” Others call out hated Democrats by name.
These hundreds of rioters are fanning out, intent on finding another group of roughly comparable size: 100 senators and 435 members of the House, in addition to the vice president. How long can the one group roam freely without meeting the other? Nothing short of stunning good luck, with an allowance for determined police and sound evacuation plans, prevented a direct encounter.
The vice president reached Room S-214, his ceremonial Senate office, at about 2:14 p.m. No sooner had his entourage closed the door, which is made of opaque white glass, than the leading edge of the mob reached a marble landing 100 feet away. Had the rioters arrived half a minute earlier, they could not have failed to spot the vice president and his escorts speed-walking out of the Senate chamber.
Ten minutes later, at 2:24, Trump egged on the hunt. “Mike Pence didn’t have the courage to do what should have been done to protect our Country and our Constitution,” he tweeted.
Two minutes after that, at 2:26, the Secret Service agents told Pence again what they had already said twice before: He had to move.
“The third time they came in, it wasn’t really a choice,” Marc Short, the vice president’s chief of staff, told me. “It was ‘We cannot protect you here, because all that we have between us is a glass door.’ ” When Pence refused to leave the Capitol, the agents guided him down a staircase to a shelter under the visitors’ center.
In another part of the Capitol, at about the same time, a 40-year-old businessman from Miami named Gabriel A. Garcia turned a smartphone camera toward his face to narrate the insurrection in progress. He was a first-generation Cuban American, a retired U.S. Army captain, the owner of an aluminum-roofing company, and a member of the Miami chapter of the Proud Boys, a far-right group with a penchant for street brawls. (In an August interview, Garcia described the Proud Boys as a drinking club with a passion for free speech.) In his Facebook Live video, Garcia wore a thick beard and a MAGA cap as he gripped a metal flagpole. “We just went ahead and stormed the Capitol. It’s about to get ugly,” he said.
He weaved his way to the front of a crowd that was pressing against outnumbered police in the Crypt, beneath the Rotunda. “You fucking traitors!” he screamed in their faces. When officers detained another man who tried to break through their line, Garcia dropped his flagpole and shouted “Grab him!” during a skirmish to free the detainee. “U.S.A.!” he chanted. “Storm this shit!” Then, in an ominous singsong voice, Garcia called out, “Nancy, come out and play!” Garcia was paraphrasing a villain in the 1979 urban-apocalypse film The Warriors.
That line, in the movie, precedes a brawl with switchblades, lead pipes, and baseball bats. (Garcia, who faces six criminal charges including civil disorder, has pleaded not guilty to all counts.) “It’s not like I threatened her life,” Garcia said in the interview, adding that he might not even have been talking about the speaker of the House. “I said ‘Nancy.’ Like I told my lawyer, that could mean any Nancy.” Garcia had explanations for everything on the video. “Storm this shit” meant “bring more people [to] voice their opinion.” And “‘get ugly’ is ‘we’re getting a lot of people coming behind.’ ” But the most revealing exegesis had to do with “fucking traitors.” “At that point, I wasn’t meaning the Capitol Police,” he said. “I was looking at them. But … I was talking about Congress.” He “wasn’t there to stop the certification of Biden becoming president,” he said, but to delay it. “I was there to support Ted Cruz. Senator Ted Cruz was asking for a 10-day investigation.” Delay. Buy time. Garcia knew what the mission was.
Late into the afternoon, as the violence died down and authorities regained control of the Capitol, Sidney Powell must have watched reports of the insurgency with anxious eyes on the clock. If Congress stayed out of session, there was a chance that Justice Alito might come through.
He did not. The Supreme Court denied Powell’s application the next day, after Congress completed the electoral count in the early-morning hours. Plan A and Plan B had both failed. Powell later expressed regret that Congress had been able to reconvene so quickly, mooting her request.
For a few short weeks, Republicans recoiled at the insurrection and distanced themselves from Trump. That would not last.
Ballroom A at the Treasure Island Hotel & Casino in Las Vegas is packed with college Republicans. There is a surfeit of red ties, vested suits, and pocket squares. A lot more young men than women. Two Black faces in a sea of white. No face masks at all. None of the students I ask has received a COVID vaccine.
The students have gathered to talk about the Second Amendment, the job market, and “how to attack your campus for their vaccine mandates,” as incoming Chair Will Donahue tells the crowd. Representative Paul Gosar of Arizona, a featured speaker, has another topic in mind.
“Let’s talk about January 6,” he proposes, and then, without further preamble: “Release the tapes!” There is a scattering of applause, quickly extinguished. The students do not seem to know what he is talking about.
“The 14,000-plus hours,” Gosar says. “Let’s find out who actually—who caused the turmoil. Let’s hold accountable. But let’s also make sure that the people who are innocently charged are set free. But let’s also hold those responsible for what happened accountable.” Gosar is not a natural orator, and it is often difficult to parse what he is saying. He bends at the waist and swings his head as he speaks, swallowing words and garbling syntax. No one in the Las Vegas audience seems to be following his train of thought. He moves on.
“We’re in the middle of a verbal and cultural war,” he says. “Very much like a civil war, where it’s brother against brother … We are the light. They are the darkness. Don’t shy away from that.”
A little sleuthing afterward reveals that 14,000 hours is the sum of footage preserved from the Capitol’s closed-circuit video cameras between the hours of noon and 8 p.m. on January 6. The Capitol Police, according to an affidavit from their general counsel, have shared the footage with Congress and the FBI but want to keep it out of public view because the images reveal, among other sensitive information, the Capitol’s “layout, vulnerabilities and security weaknesses.”
Gosar, like a few fellow conservatives, has reasoned from this that the Biden administration is concealing “exculpatory evidence” about the insurrectionists. The January 6 defendants, as Gosar portrays them in a tweet, are guilty of no more than a “stroll through statuary hall during non-business hours.” Another day he tweets, baselessly, “The violence was instigated by FBI assets.”
This is the same Paul Gosar who, in November, tweeted an anime video, prepared by his staff, depicting him in mortal combat with Representative Alexandria Ocasio-Cortez. In it he raises a sword and kills her with a blow to the neck. For incitement of violence against a colleague, the House voted to censure Gosar and stripped him of his committee assignments. Gosar, unrepentant, compared himself to Alexander Hamilton.
It’s the same Paul Gosar who, twice in recent months, has purported to be in possession of secret intelligence about vote-rigging from a source in the “CIA fraud department,” which does not exist, and from the “security exchange fraud department,” and also from someone “from Fraud from the Department of Defense,” all of whom were somehow monitoring voting machines and all of whom telephoned to alert him to chicanery.
Gosar has become a leading voice of January 6 revisionism, and he may have more reason than most to revise. In an unguarded video on Periscope, since deleted but preserved by the Project on Government Oversight, Ali Alexander, one of the principal organizers of the “Stop the Steal” rally, said, “I was the person who came up with the January 6 idea with Congressman Gosar” and two other Republican House members. “We four schemed up putting maximum pressure on Congress while they were voting.” “Stop the Steal” organizers created and later tried to delete a website called Wild Protest that directed supporters to trespass on the Capitol steps, where demonstrations are illegal: “We the People must take to the US Capitol lawn and steps and tell Congress #DoNotCertify on #JAN6!” Gosar was listed on the site as a marquee name.
In the final days of the Trump administration, CNN reported that Gosar (among other members of Congress) had asked Trump for a preemptive pardon for his part in the events of January 6. He did not get one. (Tom Van Flein, Gosar’s chief of staff, said in an email that both the pardon story and Alexander’s account were “categorically false.” He added, “Talking about a rally and speeches are one thing. Planning violence is another.”)
Assembled in one place, the elements of the revisionist narrative from Gosar and his allies resemble a litigator’s “argument in the alternative.” January 6 was a peaceful exercise of First Amendment rights. Or it was violent, but the violence came from antifa and FBI plants. Or the violent people, the ones charged in court, are patriots and political prisoners.
Or, perhaps, they are victims of unprovoked violence themselves. “They get down there, and they get assaulted by the law-enforcement officers,” Gabriel Pollock said in an interview from behind the counter at Rapture Guns and Knives in North Lakeland, Florida, speaking of family members who are facing criminal charges. “It was an ambush, is really what it was. All of that is going to come out in the court case.”
The most potent symbol of the revisionists is Ashli Babbitt, the 35-year-old Air Force veteran and QAnon adherent who died from a gunshot wound to the left shoulder as she tried to climb through a broken glass door. The shooting came half an hour after the mob’s near-encounter with Pence, and was an even closer call. This time the insurgents could see their quarry, dozens of House members clustered in the confined space of the Speaker’s Lobby. Rioters slammed fists and feet and a helmet into the reinforced glass of the barricaded doorway, eventually creating a hole big enough for Babbitt.
Whether the shooting was warranted is debatable.
Federal prosecutors cleared Lieutenant Michael Byrd of wrongdoing, and the Capitol Police exonerated him, saying, “The actions of the officer in this case potentially saved Members and staff from serious injury and possible death from a large crowd of rioters who … were steps away.” The crowd was plainly eager to follow Babbitt through the breach, but a legal analysis in Lawfare argued that the unarmed Babbitt personally would have had to pose a serious threat to justify the shooting.
Gosar helped lead the campaign to make a martyr of Babbitt, who was shot wearing a Trump flag as a cape around her neck. “Who executed Ashli Babbitt?” he asked at a House hearing in May, before Byrd’s identity was known. At another hearing, in June, he said the officer “appeared to be hiding, lying in wait, and then gave no warning before killing her.” “Was she on the right side of history?” I asked Gosar this summer.
“History has yet to be written,” he replied. “Release the tapes, and then history can be written.”
As word spread in right-wing circles that the then-unidentified officer was Black, race quickly entered the narrative. Henry “Enrique” Tarrio, the leader of the Proud Boys, shared a Telegram message from another user that said, “This black man was waiting to execute someone on january 6th. He chose Ashli Babbitt.” An account called “Justice for January 6” tweeted that Byrd “should be in jail for the execution of Ashli Babbitt, but instead he is being lauded as a hero. The ONLY racial injustice in America today is antiwhiteism.”
The penultimate stage of the new narrative held that Democrats had seized upon false accusations of rebellion in order to unleash the “deep state” against patriotic Americans. Dylan Martin, a student leader at the Las Vegas event at which Gosar spoke, adopted that view. “The Democratic Party seems to be using [January 6] as a rallying cry to persecute and completely use the force of the federal government to clamp down on conservatives across the nation,” he told me.
Trump himself proposed the final inversion of January 6 as a political symbol: “The insurrection took place on November 3, Election Day. January 6 was the Protest!” he wrote in a statement released by his fundraising group in October.
It is difficult today to find a Republican elected official who will take issue with that proposition in public. With Trump loyalists ascendant, no room is left for dissent in a party now fully devoted to twisting the electoral system for the former president. Anyone who thinks otherwise need only glance toward Wyoming, where Liz Cheney, so recently in the party’s power elite, has been toppled from her leadership post and expelled from the state Republican Party for lèse-majesté.
In the first days of January 2021, as Trump and his legal advisers squeezed Pence to stop the electoral count, they told the vice president that state legislatures around the country were on the cusp of replacing electors who’d voted for Biden with those who would vote for Trump. They were lying, but they were trying mightily to make it true.
Marc Short, Pence’s closest adviser, did not think it would happen. “In any sort of due diligence that we did with a Senate majority leader, a House minority leader, or any of those people, it was clear that they had certified their results and there was no intention of a separate slate of electors or any sort of challenge to that certification,” he told me. Trump might have support for his maneuver from “one or two” legislators in a given state, “but that was never something that actually garnered the support of a majority of any elected body.” The letter from wavering Pennsylvania state senators suggests that the situation wasn’t quite so black-and-white; the dams were beginning to crack. Even so, Trump’s demand—that statehouses fire their voters and hand him the votes—was so far beyond the bounds of normal politics that politicians found it difficult to conceive.
With the passage of a year, it is no longer so hard. There is precedent now for the conversation, the next time it happens, and there are competent lawyers to smooth the path. Most of all, there is the roaring tide of revanchist anger among Trump supporters, rising up against anyone who would thwart his will. Scarcely an elected Republican dares resist them, and many surf exultantly in their wake.
A year ago I asked the Princeton historian Kevin Kruse how he explained the integrity of the Republican officials who said no, under pressure, to the attempted coup in 2020 and early ’21. “I think it did depend on the personalities,” he told me. “I think you replace those officials, those judges, with ones who are more willing to follow the party line, and you get a different set of outcomes.” Today that reads like a coup plotter’s to-do list. Since the 2020 election, Trump’s acolytes have set about methodically identifying patches of resistance and pulling them out by the roots. Brad Raffensperger in Georgia, who refused to “find” extra votes for Trump? Formally censured by his state party, primaried, and stripped of his power as chief election officer. Aaron Van Langevelde in Michigan, who certified Biden’s victory? Hounded off the Board of State Canvassers. Governor Doug Ducey in Arizona, who signed his state’s “certificate of ascertainment” for Biden? Trump has endorsed a former Fox 10 news anchor named Kari Lake to succeed him, predicting that she “will fight to restore Election Integrity (both past and future!).” Future , here, is the operative word. Lake says she would not have certified Biden’s victory in Arizona, and even promises to revoke it (somehow) if she wins. None of this is normal.
Arizona’s legislature, meanwhile, has passed a law forbidding Katie Hobbs, the Democratic secretary of state, to take part in election lawsuits, as she did at crucial junctures last year. The legislature is also debating an extraordinary bill asserting its own prerogative, “by majority vote at any time before the presidential inauguration,” to “revoke the secretary of state’s issuance or certification of a presidential elector’s certificate of election.” There was no such thing under law as a method to “decertify” electors when Trump demanded it in 2020, but state Republicans think they have invented one for 2024.
In at least 15 more states, Republicans have advanced new laws to shift authority over elections from governors and career officials in the executive branch to the legislature. Under the Orwellian banner of “election integrity,” even more have rewritten laws to make it harder for Democrats to vote. Death threats and harassment from Trump supporters have meanwhile driven nonpartisan voting administrators to contemplate retirement.
Vernetta Keith Nuriddin, 52, who left the Fulton County, Georgia, election board in June, told me she had been bombarded with menacing emails from Trump supporters. One email, she recalled, said, “You guys need to be publicly executed … on pay per view.” Another, a copy of which she provided me, said, “Tick, Tick, Tick” in the subject line and “Not long now” as the message. Nuriddin said she knows colleagues on at least four county election boards who resigned in 2021 or chose not to renew their positions.
Georgia Governor Brian Kemp, excommunicated and primaried at Trump’s behest for certifying Biden’s victory, nonetheless signed a new law in March that undercuts the power of the county authorities who normally manage elections. Now a GOP-dominated state board, beholden to the legislature, may overrule and take control of voting tallies in any jurisdiction—for example, a heavily Black and Democratic one like Fulton County. The State Election Board can suspend a county board if it deems the board to be “underperforming” and replace it with a handpicked administrator. The administrator, in turn, will have final say on disqualifying voters and declaring ballots null and void. Instead of complaining about balls and strikes, Team Trump will now own the referee.
“The best-case scenario is [that in] the next session this law is overturned,” Nuriddin said. “The worst case is they start just pulling election directors across the state.”
The Justice Department has filed suit to overturn some provisions of the new Georgia law—but not to challenge the hostile takeover of election authorities. Instead, the federal lawsuit takes issue with a long list of traditional voter-suppression tactics that, according to Attorney General Merrick Garland, have the intent and effect of disadvantaging Black voters. These include prohibitions and “onerous fines” that restrict the distribution of absentee ballots, limit the use of ballot drop boxes, and forbid handing out food or water to voters waiting in line. These provisions make it harder, by design, for Democrats to vote in Georgia. The provisions that Garland did not challenge make it easier for Republicans to fix the outcome. They represent danger of a whole different magnitude.
The coming midterm elections, meanwhile, could tip the balance further. Among the 36 states that will choose new governors in 2022, three are presidential battlegrounds—Pennsylvania, Wisconsin, and Michigan—where Democratic governors until now have thwarted attempts by Republican legislatures to cancel Biden’s victory and rewrite election rules. Republican challengers in those states have pledged allegiance to the Big Lie, and the contests look to be competitive. In at least seven states, Big Lie Republicans have been vying for Trump’s endorsement for secretary of state, the office that will oversee the 2024 election. Trump has already endorsed three of them, in the battleground states of Arizona, Georgia, and Michigan.
Down in the enlisted ranks, Trump’s army of the dispossessed is hearing language from Republican elected officials that validates an instinct for violence. Angry rhetoric comparing January 6 to 1776 (Representative Lauren Boebert) or vaccine requirements to the Holocaust (Kansas House Representative Brenda Landwehr) reliably produces death threats by the hundreds against perceived enemies—whether Democratic or Republican.
The infinite scroll of right-wing social media is relentlessly bloody-minded. One commentator on Telegram posted on January 7 that “the congress is literally begging the people to hang them.” Another replied, “Anyone who certifies a fraudulent election has commited treason punishable by death.” One week later came, “The last stand is a civil war.” In response, another user wrote, “No protests. To late for that.” The fire burns, if anything, even hotter now, a year later.
Amid all this ferment, Trump’s legal team is fine-tuning a constitutional argument that is pitched to appeal to a five-justice majority if the 2024 election reaches the Supreme Court. This, too, exploits the GOP advantage in statehouse control. Republicans are promoting an “independent state legislature” doctrine, which holds that statehouses have “plenary,” or exclusive, control of the rules for choosing presidential electors. Taken to its logical conclusion, it could provide a legal basis for any state legislature to throw out an election result it dislikes and appoint its preferred electors instead.
Elections are complicated, and election administrators have to make hundreds of choices about election machinery and procedures—the time, place, and manner of voting or counting or canvassing—that the legislature has not specifically authorized. A judge or county administrator may hold polls open for an extra hour to make up for a power outage that temporarily halts voting. Precinct workers may exercise their discretion to help voters “cure” technical errors on their ballots. A judge may rule that the state constitution limits or overrides a provision of state election law.
Four justices—Alito, Neil Gorsuch, Brett Kavanaugh, and Clarence Thomas—have already signaled support for a doctrine that disallows any such deviation from the election rules passed by a state legislature. It is an absolutist reading of legislative control over the “manner” of appointing electors under Article II of the U.S. Constitution. Justice Amy Coney Barrett, Trump’s last appointee, has never opined on the issue.
The question could arise, and Barrett’s vote could become decisive, if Trump again asks a Republican-controlled legislature to set aside a Democratic victory at the polls. Any such legislature would be able to point to multiple actions during the election that it had not specifically authorized. To repeat, that is the norm for how elections are carried out today. Discretionary procedures are baked into the cake. A Supreme Court friendly to the doctrine of independent state legislatures would have a range of remedies available to it; the justices might, for instance, simply disqualify the portion of the votes that were cast through “unauthorized” procedures. But one of those remedies would be the nuclear option: throwing out the vote altogether and allowing the state legislature to appoint electors of its choosing.
Trump is not relying on the clown-car legal team that lost nearly every court case last time. The independent-state-legislature doctrine has a Federalist Society imprimatur and attorneys from top-tier firms like BakerHostetler. A dark-money voter-suppression group that calls itself the Honest Elections Project has already featured the argument in an amicus brief.
“One of the minimal requirements for a democracy is that popular elections will determine political leadership,” Nate Persily, a Stanford Law School expert on election law, told me. “If a legislature can effectively overrule the popular vote, it turns democracy on its head.” Persily and UC Irvine’s Hasen, among other election-law scholars, fear that the Supreme Court could take an absolutist stance that would do exactly that.
One sign that legislative supremacy is more than a hypothetical construct is that it has migrated into the talking points of Republican elected officials. On ABC’s This Week, for example, while refusing to opine on whether Biden had stolen the election, House Minority Whip Steve Scalise explained in February 2021, “There were a few states that did not follow their state laws. That’s really the dispute that you’ve seen continue on.” Trump himself has absorbed enough of the argument to tell the Washington Post reporters Carol Leonnig and Philip Rucker, “The legislatures of the states did not approve all of the things that were done for those elections. And under the Constitution of the United States, they have to do that.”
There is a clear and present danger that American democracy will not withstand the destructive forces that are now converging upon it. Our two-party system has only one party left that is willing to lose an election. The other is willing to win at the cost of breaking things that a democracy cannot live without.
Democracies have fallen before under stresses like these, when the people who might have defended them were transfixed by disbelief. If ours is to stand, its defenders have to rouse themselves.
Joe Biden looked as though he might do that on the afternoon of July 13. He traveled to the National Constitution Center in Philadelphia, which features on its facade an immense reproduction of the Preamble in 18th-century script, to deliver what was billed as a major address on democracy.
What followed was incongruous. Biden began well enough, laying out how the core problem of voting rights had changed. It was “no longer just about who gets to vote” but “who gets to count the vote.” There were “partisan actors” seizing power from independent election authorities. “To me, this is simple: This is election subversion,” he said. “They want the ability to reject the final count and ignore the will of the people if their preferred candidate loses.” He described the means by which the next election might be stolen, though vaguely: “You vote for certain electors to vote for somebody for president” and then a “state legislator comes along … and they say, ‘No, we don’t like those electors. We’re going to appoint other electors who are going to vote for the other guy or other woman.’ ” And he laid down a strong marker as he reached his rhetorical peak.
“We’re facing the most significant test of our democracy since the Civil War. That’s not hyperbole,” he said. “I’m not saying this to alarm you. I’m saying this because you should be alarmed.”
But then, having looked directly toward the threat on the horizon, Biden seemed to turn away, as if he doubted the evidence before his eyes. There was no appreciable call to action, save for the bare words themselves: “We’ve got to act.”
Biden’s list of remedies was short and grossly incommensurate with the challenge. He expressed support for two bills—the For the People Act and the John Lewis Voting Rights Advancement Act—that were dead on arrival in the Senate because Democrats had no answer to the Republican filibuster. He said the attorney general would double the Department of Justice staff devoted to voting-rights enforcement. Civil-rights groups would “stay vigilant.” Vice President Kamala Harris would lead “an all-out effort to educate voters about the changing laws, register them to vote, and then get the vote out.” And then he mentioned one last plan that proved he did not accept the nature of the threat: “We will be asking my Republican friends—in Congress, in states, in cities, in counties—to stand up, for God’s sake, and help prevent this concerted effort to undermine our elections and the sacred right to vote.”
So: enforcement of inadequate laws, wishful thinking about new laws, vigilance, voter education, and a friendly request that Republicans stand athwart their own electoral schemes.
Conspicuously missing from Biden’s speech was any mention even of filibuster reform, without which voting-rights legislation is doomed. Nor was there any mention of holding Trump and his minions accountable, legally, for plotting a coup. Patterson, the retired firefighter, was right to say that nobody has been charged with insurrection; the question is, why not? The Justice Department and the FBI are chasing down the foot soldiers of January 6, but there is no public sign that they are building cases against the men and women who sent them. Absent consequences, they will certainly try again. An unpunished plot is practice for the next.
Donald Trump came closer than anyone thought he could to toppling a free election a year ago. He is preparing in plain view to do it again, and his position is growing stronger. Republican acolytes have identified the weak points in our electoral apparatus and are methodically exploiting them. They have set loose and now are driven by the animus of tens of millions of aggrieved Trump supporters who are prone to conspiracy thinking, embrace violence, and reject democratic defeat. Those supporters, Robert Pape’s “committed insurrectionists,” are armed and single-minded and will know what to do the next time Trump calls upon them to act.
Democracy will be on trial in 2024. A strong and clear-eyed president, faced with such a test, would devote his presidency to meeting it. Biden knows better than I do what it looks like when a president fully marshals his power and resources to face a challenge. It doesn’t look like this.
The midterms, marked by gerrymandering, will more than likely tighten the GOP’s grip on the legislatures in swing states. The Supreme Court may be ready to give those legislatures near-absolute control over the choice of presidential electors. And if Republicans take back the House and Senate, as oddsmakers seem to believe they will, the GOP will be firmly in charge of counting the electoral votes.
Against Biden or another Democratic nominee, Donald Trump may be capable of winning a fair election in 2024. He does not intend to take that chance.
Joe Stephens contributed research and reporting.
This article appears in the January/February 2022 print edition with the headline “January 6 Was Practice.” It has been updated to clarify that the group formed in 1969 was the Provisional IRA (the original IRA was created in 1919).
" |
254 | 2,023 | "Weizenbaum’s nightmares: how the inventor of the first chatbot turned against AI | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai" | "The long read
Computer scientist Joseph Weizenbaum was there at the dawn of artificial intelligence – but he was also adamant that we must never confuse computers with humans
Ben Tarnoff
Tue 25 Jul 2023 00.00 EDT
In 1966, an MIT professor named Joseph Weizenbaum created the first chatbot. He cast it in the role of a psychotherapist. A user would type a message on an electric typewriter connected to a mainframe. After a moment, the “psychotherapist” would reply.
User: Men are all alike.
Chatbot: IN WHAT WAY
User: They’re always bugging us about something or other.
Chatbot: CAN YOU THINK OF A SPECIFIC EXAMPLE
User: Well, my boyfriend made me come here.
Chatbot: YOUR BOYFRIEND MADE YOU COME HERE
User: He says I’m depressed much of the time.
Chatbot: I AM SORRY TO HEAR YOU ARE DEPRESSED
Weizenbaum published this sample exchange in a journal article that explained how the chatbot worked. The software was relatively simple. It looked at the user input and applied a set of rules to generate a plausible response. He called the program Eliza, after Eliza Doolittle in Pygmalion. The cockney flower girl in George Bernard Shaw’s play uses language to produce an illusion: she elevates her elocution to the point where she can pass for a duchess. Similarly, Eliza would speak in such a way as to produce the illusion that it understood the person sitting at the typewriter.
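To make that concrete, here is a minimal sketch in Python of Eliza-style keyword rules. The patterns, templates and pronoun swaps below are invented for illustration, not taken from Weizenbaum’s actual script, though they reproduce the flavour of the published exchange.

```python
import random
import re

# Illustrative keyword rules in the spirit of Eliza's script. Each
# pattern captures part of the user's input; the template echoes it
# back as a question or reflection. These rules are invented for this
# sketch, not copied from the original 1966 program.
RULES = [
    (re.compile(r"my (.*) made me (.*)", re.I), ["YOUR {0} MADE YOU {1}"]),
    (re.compile(r"he says (.*)", re.I), ["I AM SORRY TO HEAR {0}"]),
    (re.compile(r"i am (.*)", re.I), ["HOW LONG HAVE YOU BEEN {0}"]),
    (re.compile(r".*\balways\b.*", re.I), ["CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
    (re.compile(r".*\ball\b.*", re.I), ["IN WHAT WAY"]),
]

# Swap pronouns so the echo reads naturally ("my" becomes "your").
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    for pattern, templates in RULES:
        match = pattern.match(user_input.strip())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups).upper()
    return "PLEASE GO ON"  # default when no rule matches

print(respond("Men are all alike."))               # IN WHAT WAY
print(respond("My boyfriend made me come here."))  # YOUR BOYFRIEND MADE YOU COME HERE.
```

A handful of rules like these, applied mechanically, can sustain a surprisingly lifelike exchange, which is exactly the illusion Weizenbaum was describing.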
“Some subjects have been very hard to convince that Eliza (with its present script) is not human,” Weizenbaum wrote. In a follow-up article that appeared the next year, he was more specific: one day, he said, his secretary requested some time with Eliza. After a few moments, she asked Weizenbaum to leave the room. “I believe this anecdote testifies to the success with which the program maintains the illusion of understanding,” he noted.
Eliza isn’t exactly obscure. It caused a stir at the time – the Boston Globe sent a reporter to go and sit at the typewriter and ran an excerpt of the conversation – and remains one of the best known developments in the history of computing. More recently, the release of ChatGPT has renewed interest in it. In the last year, Eliza has been invoked in the Guardian, the New York Times, the Atlantic and elsewhere. The reason that people are still thinking about a piece of software that is nearly 60 years old has nothing to do with its technical aspects, which weren’t terribly sophisticated even by the standards of its time. Rather, Eliza illuminated a mechanism of the human mind that strongly affects how we relate to computers.
Early in his career, Sigmund Freud noticed that his patients kept falling in love with him. It wasn’t because he was exceptionally charming or good-looking, he concluded. Instead, something more interesting was going on: transference. Briefly, transference refers to our tendency to project feelings about someone from our past on to someone in our present. While it is amplified by being in psychoanalysis, it is a feature of all relationships. When we interact with other people, we always bring a group of ghosts to the encounter. The residue of our earlier life, and above all our childhood, is the screen through which we see one another.
This concept helps make sense of people’s reactions to Eliza. Weizenbaum had stumbled across the computerised version of transference, with people attributing understanding, empathy and other human characteristics to software. While he never used the term himself, he had a long history with psychoanalysis that clearly informed how he interpreted what would come to be called the “Eliza effect”.
As computers have become more capable, the Eliza effect has only grown stronger. Take the way many people relate to ChatGPT. Inside the chatbot is a “large language model”, a mathematical system that is trained to predict the next string of characters, words, or sentences in a sequence. What distinguishes ChatGPT is not only the complexity of the large language model that underlies it, but its eerily conversational voice. As Colin Fraser, a data scientist at Meta, has put it, the application is “designed to trick you, to make you think you’re talking to someone who’s not actually there”.
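The objective is easy to demonstrate at toy scale. The sketch below, with an invented ten-word corpus, simply counts which word follows which and predicts the likeliest continuation; real systems do the same job with neural networks, subword tokens and vastly more data, but the principle of predicting what comes next carries over.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": count which word follows which in a
# tiny corpus, then predict the most likely continuation. The model
# behind ChatGPT does this with billions of parameters, but the
# training objective is the same: predict what comes next.
corpus = "men are all alike they are always bugging us about something".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("are"))  # "all" (ties break by first occurrence)
```

Scaled up by many orders of magnitude, that predict-what-comes-next objective produces the fluent conversational surface that makes the Eliza effect so easy to trigger.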
But the Eliza effect is far from the only reason to return to Weizenbaum. His experience with the software was the beginning of a remarkable journey. As an MIT professor with a prestigious career, he was, in his words, a “high priest, if not a bishop, in the cathedral to modern science”. But by the 1970s, Joseph Weizenbaum had become a heretic, publishing articles and books that condemned the worldview of his colleagues and warned of the dangers posed by their work. Artificial intelligence, he came to believe, was an “index of the insanity of our world.”
Today, the view that artificial intelligence poses some kind of threat is no longer a minority position among those working on it. There are different opinions on which risks we should be most worried about, but many prominent researchers, from Timnit Gebru to Geoffrey Hinton – both ex-Google computer scientists – share the basic view that the technology can be toxic. Weizenbaum’s pessimism made him a lonely figure among computer scientists during the last three decades of his life; he would be less lonely in 2023.
There is so much in Weizenbaum’s thinking that is urgently relevant now. Perhaps his most fundamental heresy was the belief that the computer revolution, which Weizenbaum not only lived through but centrally participated in, was actually a counter-revolution. It strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines. By ceding so many decisions to computers, he thought, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code.
Weizenbaum liked to say that every person is the product of a particular history. His ideas bear the imprint of his own particular history, which was shaped above all by the atrocities of the 20th century and the demands of his personal demons. Computers came naturally to him. The hard part, he said, was life.
What it means to be human – and how a human is different from a computer – was something Weizenbaum spent a lot of time thinking about. So it’s fitting that his own humanity was up for debate from the start. His mother had a difficult labour, and felt some disappointment at the result. “When she was finally shown me, she thought I was a bloody mess and hardly looked human,” Weizenbaum later recalled. “She couldn’t believe this was supposed to be her child.”
He was born in 1923, the youngest son of an assimilated, upper-middle class Jewish family in Berlin. His father, Jechiel, who had emigrated to Germany from Galicia, which spanned what is now south-eastern Poland and western Ukraine, at the age of 12, was an accomplished furrier who had acquired a comfortable foothold in society, a nice apartment, and a much younger Viennese wife (Weizenbaum’s mother). From the start, Jechiel treated his son with a contempt that would haunt Weizenbaum for the rest of his life. “My father was absolutely convinced that I was a worthless moron, a complete fool, that I would never become anything,” Weizenbaum later told the documentary film-makers Peter Haas and Silvia Holzinger.
By the time he was old enough to make memories, the Nazis were everywhere. His family lived near a bar frequented by Hitler’s paramilitaries, the SA, and sometimes he would see people getting dragged inside to be beaten up in the backroom. Once, while he was out with his nanny, columns of armed communists and Nazis lined up and started shooting at each other. The nanny pushed him under a parked car until the bullets stopped flying.
Shortly after Hitler became chancellor in 1933, the government passed a law that severely restricted the number of Jews in public schools. Weizenbaum had to transfer to a Jewish boys’ school. It was here that he first came into contact with the Ostjuden : Jews from eastern Europe, poor, dressed in rags, speaking Yiddish. To Weizenbaum, they may as well have come from Mars. Nevertheless, the time he spent with them gave him what he later described as “a new feeling of camaraderie”, as well as a “sensitivity for oppression”.
He became deeply attached to one of his classmates in particular. “If fate had been different, I would have developed a homosexual love for this boy,” he later said. The boy “led me into his world”, the world of the Jewish ghetto around Berlin’s Grenadierstrasse. “They had nothing, owned nothing, but somehow supported each other,” he recalled.
One day, he brought the boy back to his family’s apartment. His father, himself once a poor Jewish boy from eastern Europe, was disgusted and furious. Jechiel was very proud, Weizenbaum remembered – and he had reason to be, given the literal and figurative distances he had travelled from the shtetl. Now his son was bringing the shtetl back into his home.
Alienated from his parents, richer than his classmates, and a Jew in Nazi Germany: Weizenbaum felt comfortable nowhere. His instinct, he said, was always to “bite the hand that fed me”, to provoke the paternal figure, to be a pain in the backside. And this instinct presumably proceeded from the lesson he learned from his father’s hostility toward him and bigotry toward the boy he loved: that danger could lie within one’s home, people, tribe.
In 1936, the family left Germany suddenly, possibly because Jechiel had slept with the girlfriend of an SA member. Weizenbaum’s aunt owned a bakery in Detroit, so that’s where they went. At 13, he found himself 4,000 miles from everything he knew. “I was very, very lonely,” he recalled. School became a refuge from reality – specifically algebra, which didn’t require English, which he didn’t speak at first. “Of all the things that one could study,” he later said, “mathematics seemed by far the easiest. Mathematics is a game. It is entirely abstract.”
In his school’s metalworking class, he learned to operate a lathe. The experience brought him out of his brain and into his body. About 70 years later, he looked back on the realisation prompted by this new skill: that intelligence “isn’t just in the head but also in the arm, in the wrist, in the hand”. Thus, at a young age, two concepts were in place that would later steer his career as a practitioner and critic of AI: on the one hand, an appreciation for the pleasures of abstraction; on the other, a suspicion of those pleasures as escapist, and a related understanding that human intelligence exists in the whole person and not in any one part.
In 1941, Weizenbaum enrolled at the local public university. Wayne University was a working-class place: cheap to attend, filled with students holding down full-time jobs. The seeds of social consciousness that had been planted in Berlin started to grow: Weizenbaum saw parallels between the oppression of Black people in Detroit and that of the Jews under Hitler. This was also a time of incandescent class struggle in the city – the United Auto Workers union won its first contract with Ford the same year that Weizenbaum entered college.
Weizenbaum’s growing leftwing political commitments complicated his love of mathematics. “I wanted to do something for the world or society,” he remembered. “To study plain mathematics, as if the world were doing fine, or even didn’t exist at all – that’s not what I wanted.” He soon had his chance. In 1941, the US entered the second world war; the following year, Weizenbaum was drafted. He spent the next five years working as a meteorologist for the Army Air corps, stationed on different bases across the US. The military was a “salvation”, he later said. What fun, to get free of his family and fight Hitler at the same time.
While home on furlough, he began a romance with Selma Goode, a Jewish civil rights activist and early member of the Democratic Socialists of America. Before long they were married, with a baby boy, and after the war Weizenbaum moved back to Detroit. There, he resumed his studies at Wayne, now financed by the federal government through the GI Bill.
Then, in the late 1940s, the couple got divorced, with Goode taking custody of their son. “That was incredibly tragic for me,” Weizenbaum later said. “It took me a long time to get over it.” His mental state was forever unsteady: his daughter Pm – pronounced “Pim” and named after the New York leftwing daily newspaper PM – told me that he had been hospitalised for anorexia during his time at university. Everything he did, he felt he did badly. In the army he was promoted to sergeant and honourably discharged; nonetheless, he left convinced that he had somehow hindered the war effort. He later attributed his self-doubt to his father constantly telling him he was worthless. “If something like that is repeated to you as a child, you end up believing it yourself,” he reflected.
In the wake of the personal crisis produced by Selma’s departure came two consequential first encounters. He went into psychoanalysis and he went into computing.
Eniac, one of the world’s first electronic digital computers, circa 1945.
In those days, a computer, like a psyche, was an interior. “You didn’t go to the computer,” Weizenbaum said in a 2010 documentary. “Instead, you went inside of it.” The war had provided the impetus for building gigantic machines that could mechanise the hard work of mathematical calculation. Computers helped crack Nazi encryption and find the best angles for aiming artillery. The postwar consolidation of the military-industrial complex, in the early days of the cold war, drew large sums of US government money into developing the technology. By the late 1940s, the fundamentals of the modern computer were in place.
But it still wasn’t easy to get one. So one of Weizenbaum’s professors resolved to build his own. He assembled a small team of students and invited Weizenbaum to join. Constructing the computer, Weizenbaum grew happy and purposeful. “I was full of life and enthusiastic about my work,” he remembered. Here were the forces of abstraction that he first encountered in middle-school algebra. Like algebra, a computer modelled, and thereby simplified, reality – yet it could do so with such fidelity that one could easily forget that it was only a representation. Software also imparted a sense of mastery. “The programmer has a kind of power over a stage incomparably larger than that of a theatre director,” he later said in the 2007 documentary Rebel at Work. “Bigger than that of Shakespeare.”
About this time, Weizenbaum met a schoolteacher named Ruth Manes. In 1952, they married and moved into a small apartment near the university. She “couldn’t have been further from him culturally”, their daughter Miriam told me. She wasn’t a Jewish socialist like his first wife – her family was from the deep south. Their marriage represented “a reach for normalcy and a settled life” on his part, Miriam said. His political passions cooled.
By the early 1960s, Weizenbaum was working as a programmer for General Electric in Silicon Valley. He and Ruth were raising three daughters and would soon have a fourth. At GE, he built a computer for the Navy that launched missiles and a computer for Bank of America that processed cheques. “It never occurred to me at the time that I was cooperating in a technological venture which had certain social side effects which I might come to regret,” he later said.
In 1963, the prestigious Massachusetts Institute of Technology called. Would he like to join the faculty as a visiting associate professor? “That was like offering a young boy the chance to work in a toy factory that makes toy trains,” Weizenbaum remembered.
The computer that Weizenbaum had helped build in Detroit was an ogre, occupying an entire lecture hall and exhaling enough heat to keep the library warm in winter. Interacting with it involved a set of highly structured rituals: you wrote out a program by hand, encoded it as a pattern of holes on punch cards, and then ran the cards through the computer. This was standard operating procedure in the technology’s early days, making programming fiddly and laborious.
MIT’s computer scientists sought an alternative. In 1963, with a $2.2m grant from the Pentagon, the university launched Project MAC – an acronym with many meanings, including “machine-aided cognition”. The plan was to create a computer system that was more accessible and responsive to individual needs.
To that end, the computer scientists perfected a technology called “time-sharing”, which enabled the kind of computing we take for granted today. Rather than loading up a pile of punch cards and returning the next day to see the result, you could type in a command and get an immediate response. Moreover, multiple people could use a single mainframe simultaneously from individual terminals, which made the machines seem more personal.
With time-sharing came a new type of software. The programs that ran on MIT’s system included those for sending messages from one user to another (a precursor of email), editing text (early word processing) and searching a database with 15,000 journal articles (a primitive JSTOR). Time-sharing also changed how people wrote programs. The technology made it possible “to interact with the computer conversationally,” Weizenbaum later recalled. Software development could unfold as a dialogue between programmer and machine: you try a bit of code, see what comes back, then try a little more.
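That conversational loop survives today as the read-eval-print loop, or REPL. As a purely illustrative sketch (in modern Python; the Project MAC programmers worked in very different languages on very different hardware), the whole dialogue fits in a few lines:

```python
# A toy read-eval-print loop: the machine answers each line the moment
# it is typed, turning programming into a conversation. eval() is used
# only for illustration here; it is unsafe on untrusted input.
while True:
    line = input(">>> ")
    if line in ("quit", "exit"):
        break
    try:
        print(eval(line))            # evaluate and echo immediately
    except Exception as exc:         # mistakes also come back at once
        print("error:", exc)
```

Type an expression, see the result, refine it, try again: the rhythm is exactly the cycle that time-sharing introduced.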
Weizenbaum wanted to go further. What if you could converse with a computer in a so-called natural language, like English? This was the question that guided the creation of Eliza, the success of which made his name at the university and helped him secure tenure in 1967. It also brought Weizenbaum into the orbit of MIT’s Artificial Intelligence Project, which had been set up in 1958 by John McCarthy and Marvin Minsky.
McCarthy had coined the phrase “artificial intelligence” a few years earlier when he needed a title for an academic workshop. The phrase was neutral enough to avoid overlap with existing areas of research like cybernetics, amorphous enough to attract cross-disciplinary contributions, and audacious enough to convey his radicalism (or, if you like, arrogance) about what machines were capable of. This radicalism was affirmed in the original workshop proposal. “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it,” it asserted.
Marvin Minsky in the early 1980s.
Minsky was bullish and provocative; one of his favourite gambits was to declare the human brain nothing but a “meat machine” whose functions could be reproduced, or even surpassed, by human-made machines. Weizenbaum disliked him from the start. It wasn’t his faith in the capabilities of technology that bothered Weizenbaum; he himself had seen computers progress immensely by the mid-1960s. Rather, Weizenbaum’s trouble with Minsky, and with the AI community as a whole, came down to a fundamental disagreement about the nature of the human condition.
In Weizenbaum’s 1967 follow-up to his first article about Eliza, he argued that no computer could ever fully understand a human being. Then he went one step further: no human being could ever fully understand another human being. Everyone is formed by a unique collection of life experiences that we carry around with us, he argued, and this inheritance places limits on our ability to comprehend one another. We can use language to communicate, but the same words conjure different associations for different people – and some things can’t be communicated at all. “There is an ultimate privacy about each of us that absolutely precludes full communication of any of our ideas to the universe outside ourselves,” Weizenbaum wrote.
This was a very different perspective than that of Minsky or McCarthy. It clearly bore the influence of psychoanalysis. Here was the mind not as a meat machine but as a psyche – something with depth and strangeness. If we are often opaque to one another and even to ourselves, what hope is there for a computer to know us? Yet, as Eliza illustrated, it was surprisingly easy to trick people into feeling that a computer did know them – and into seeing that computer as human. Even in his original 1966 article, Weizenbaum had worried about the consequences of this phenomenon, warning that it might lead people to regard computers as possessing powers of “judgment” that are “deserving of credibility”. “A certain danger lurks there,” he wrote.
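The machinery behind that illusion was remarkably thin. Weizenbaum wrote Eliza in MAD-SLIP, driven by a separate “script” of ranked keywords and decomposition rules; the sketch below is only a loose modern caricature of the technique, with a handful of invented rules, but it reproduces the essential trick of keyword spotting plus pronoun reflection:

```python
import re

# A drastically simplified, hypothetical imitation of Eliza's method:
# find a keyword pattern, reflect the speaker's pronouns, and drop the
# fragment into a canned therapist-style template. No meaning is
# modelled anywhere; only surface patterns.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock reply when nothing matches

print(respond("I am unhappy about my job"))
# -> How long have you been unhappy about your job?
```

A user who types “I am unhappy about my job” receives what reads like attentive interest, produced by nothing more than a pattern match and a word swap.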
In the mid-1960s, this was as far as he was willing to go. He pointed to a danger, but didn’t dwell on it. He was, after all, a depressed kid who had escaped the Holocaust, who always felt like an impostor, but who had found prestige and self-worth in the high temple of technology. It can be hard to admit that something you are good at, something you enjoy, is bad for the world – and even harder to act on that knowledge. For Weizenbaum, it would take a war to know what to do next.
On 4 March 1969, MIT students staged a one-day “research stoppage” to protest the Vietnam war and their university’s role in it. People braved the snow and cold to pile into Kresge Auditorium in the heart of campus for a series of talks and panels that had begun the night before. Noam Chomsky spoke, as did the anti-war senator George McGovern. Student activism had been growing at MIT, but this was the largest demonstration to date, and it received extensive coverage in the national press. “The feeling in 1969 was that scientists were complicit in a great evil, and the thrust of 4 March was how to change it,” one of the lead organisers later wrote.
Weizenbaum supported the action and became strongly affected by the political dynamism of the time. “It wasn’t until the merger of the civil rights movement, the war in Vietnam, and MIT’s role in weapons development that I became critical,” he later explained in an interview. “And once I started thinking along those lines, I couldn’t stop.” In the last years of his life, he would reflect on his politicisation during the 1960s as a return to the social consciousness of his leftist days in Detroit and his experiences in Nazi Germany: “I stayed true to who I was,” he told the German writer Gunna Wendt.
He began to think about the German scientists who had lent their expertise to the Nazi regime. “I had to ask myself: do I want to play that kind of role?” he remembered in 1995. He had two choices. One was to “push all this sort of thinking down”, to repress it. The other was “to look at it seriously”.
Looking at it seriously would require examining the close ties between his field and the war machine that was then dropping napalm on Vietnamese children. Defense Secretary Robert McNamara championed the computer as part of his crusade to bring a mathematical mindset to the Pentagon. Data, sourced from the field and analysed with software, helped military planners decide where to put troops and where to drop bombs.
A protest against the Vietnam war at the Massachusetts Institute of Technology in November 1969.
By 1969, MIT was receiving more money from the Pentagon than any other university in the country. Its labs pursued a number of projects designed for Vietnam, such as a system to stabilise helicopters in order to make it easier for a machine-gunner to obliterate targets in the jungle below. Project MAC – under whose auspices Weizenbaum had created Eliza – had been funded since its inception by the Pentagon.
As Weizenbaum wrestled with this complicity, he found that his colleagues, for the most part, didn’t care about the purposes to which their research might be put. If we don’t do it, they told him, somebody else will. Or: scientists don’t make policy, leave that to the politicians. Weizenbaum was again reminded of the scientists in Nazi Germany who insisted that their work had nothing to do with politics.
Consumed by a sense of responsibility, Weizenbaum dedicated himself to the anti-war movement. “He got so radicalised that he didn’t really do much computer research at that point,” his daughter Pm told me. Instead, he joined street demonstrations and met anti-war students. Where possible, he used his status at MIT to undermine the university’s opposition to student activism. After students occupied the president’s office in 1970, Weizenbaum served on the disciplinary committee. According to his daughter Miriam, he insisted on a strict adherence to due process, thereby dragging out the proceedings as long as possible so that students could graduate with their degrees.
It was during this period that certain unresolved questions about Eliza began to bother him more acutely. Why had people reacted so enthusiastically and so delusionally to the chatbot, especially those experts who should know better? Some psychiatrists had hailed Eliza as the first step toward automated psychotherapy; some computer scientists had celebrated it as a solution to the problem of writing software that understood language. Weizenbaum became convinced that these responses were “symptomatic of deeper problems” – problems that were linked in some way to the war in Vietnam. And if he wasn’t able to figure out what they were, he wouldn’t be able to keep going professionally.
In 1976, Weizenbaum published his magnum opus: Computer Power and Human Reason: From Judgment to Calculation. “The book has overwhelmed me, like being crashed over by the sea,” read a blurb from the libertarian activist Karl Hess. The book is indeed overwhelming. It is a chaotic barrage of often brilliant thoughts about computers. A glimpse at the index reveals the range of Weizenbaum’s interlocutors: not only colleagues like Minsky and McCarthy but the political philosopher Hannah Arendt, the critical theorist Max Horkheimer, and the experimental playwright Eugène Ionesco. He had begun work on the book after completing a fellowship at Stanford University, in California, where he enjoyed no responsibilities, a big office and lots of stimulating discussions with literary critics, philosophers and psychiatrists. With Computer Power and Human Reason, he wasn’t so much renouncing computer science as trying to break it open and let alternative traditions come pouring in.
The book has two major arguments. First: “There is a difference between man and machine.” Second: “There are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them.” The book’s subtitle – From Judgment to Calculation – offers a clue as to how these two statements fit together.
For Weizenbaum, judgment involves choices that are guided by values. These values are acquired through the course of our life experience and are necessarily qualitative: they cannot be captured in code. Calculation, by contrast, is quantitative. It uses a technical calculus to arrive at a decision. Computers are only capable of calculation, not judgment. This is because they are not human, which is to say, they do not have a human history – they were not born to mothers, they did not have a childhood, they do not inhabit human bodies or possess a human psyche with a human unconscious – and so do not have the basis from which to form values.
And that would be fine, if we confined computers to tasks that only required calculation. But thanks in large part to a successful ideological campaign waged by what he called the “artificial intelligentsia”, people increasingly saw humans and computers as interchangeable. As a result, computers had been given authority over matters in which they had no competence. (It would be a “monstrous obscenity”, Weizenbaum wrote, to let a computer perform the functions of a judge in a legal setting or a psychiatrist in a clinical one.) Seeing humans and computers as interchangeable also meant that humans had begun to conceive of themselves as computers, and so to act like them. They mechanised their rational faculties by abandoning judgment for calculation, mirroring the machine in whose reflection they saw themselves.
This had especially destructive policy consequences. Powerful figures in government and business could outsource decisions to computer systems as a way to perpetuate certain practices while absolving themselves of responsibility. Just as the bomber pilot “is not responsible for burned children because he never sees their village”, Weizenbaum wrote, software afforded generals and executives a comparable degree of psychological distance from the suffering they caused.
Letting computers make more decisions also shrank the range of possible decisions that could be made. Bound by an algorithmic logic, software lacked the flexibility and the freedom of human judgment. This helps explain the conservative impulse at the heart of computation. Historically, the computer arrived “just in time”, Weizenbaum wrote. But in time for what? “In time to save – and save very nearly intact, indeed, to entrench and stabilise – social and political structures that otherwise might have been either radically renovated or allowed to totter under the demands that were sure to be made on them.”

Computers became mainstream in the 1960s, growing deep roots within American institutions just as those institutions faced grave challenges on multiple fronts. The civil rights movement, the anti-war movement and the New Left are just a few of the channels through which the era’s anti-establishment energies found expression. Protesters frequently targeted information technology, not only because of its role in the Vietnam war but also due to its association with the imprisoning forces of capitalism. In 1970, activists at the University of Wisconsin destroyed a mainframe during a building occupation; the same year, protesters almost blew one up with napalm at New York University.
This was the atmosphere in which Computer Power and Human Reason appeared. Computation had become intensely politicised. There was still an open question as to the path that it should take. On one side stood those who “believe there are limits to what computers ought to be put to do,” Weizenbaum writes in the book’s introduction. On the other were those who “believe computers can, should, and will do everything” – the artificial intelligentsia.
Marx once described his work Capital as “the most terrible missile that has yet been hurled at the heads of the bourgeoisie”. Computer Power and Human Reason seemed to strike the artificial intelligentsia with similar force. McCarthy, the original AI guru, seethed: “Moralistic and incoherent”, a work of “new left sloganeering”, he wrote in a review. Benjamin Kuipers from MIT’s AI Lab – a PhD student of Minsky’s – complained of Weizenbaum’s “harsh and sometimes shrill accusations against the artificial intelligence research community”. Weizenbaum threw himself into the fray: he wrote a point-by-point reply to McCarthy’s review, which led to a response from the Yale AI scientist Roger C Schank – to which Weizenbaum also replied. He clearly relished the combat.
In the spring of 1977, the controversy spilled on to the front page of the New York Times. “Can machines think? Should they? The computer world is in the midst of a fundamental dispute over those questions,” wrote the journalist Lee Dembart. Weizenbaum gave an interview from his MIT office: “I have pronounced heresy and I am a heretic.”

Computer Power and Human Reason caused such a stir because its author came from the world of computer science. But another factor was the besieged state of AI itself. By the mid-1970s, a combination of budget-tightening and mounting frustration within government circles about the field failing to live up to its hype had produced the first “AI winter”. Researchers now struggled to get funding. The elevated temperature of their response to Weizenbaum was likely due at least in part to the perception that he was kicking them when they were down.
AI wasn’t the only area of computation being critically reappraised in these years. Congress had recently been contemplating ways to regulate “electronic data processing” by governments and businesses in order to protect people’s privacy and to mitigate the potential harms of computerised decision-making. (The watered-down Privacy Act was passed in 1974.) Between radicals attacking computer centers on campus and Capitol Hill looking closely at data regulation, the first “techlash” had arrived. It was good timing for Weizenbaum.
Weizenbaum in Germany in 2005.
Computer Power and Human Reason gave him a national reputation. He was delighted. “Recognition was so important to him,” his daughter Miriam told me. As the “house pessimist of the MIT lab” (the Boston Globe), he became a go-to source for journalists writing about AI and computers, one who could always be relied upon for a memorable quote.
But the doubts and anxieties that had plagued him since childhood never left. “I remember him saying that he felt like a fraud,” Miriam told me. “He didn’t think he was as smart as people thought he was. He never felt like he was good enough.” As the excitement around the book died down, these feelings grew overwhelming. His daughter Pm told me that Weizenbaum attempted suicide in the early 1980s. He was hospitalised at one point; a psychiatrist diagnosed him with narcissistic personality disorder. The sharp swings between grandiosity and dejection took their toll on his loved ones. “He was a very damaged person and there was only so much he could absorb of love and family,” Pm said.
In 1988, he retired from MIT. “I think he ended up feeling pretty alienated,” Miriam told me. In the early 1990s, his second wife, Ruth, left him; in 1996, he returned to Berlin, the city he had fled 60 years earlier. “Once he moved back to Germany, he seemed much more content and engaged with life,” Pm said. He found life easier there. As his fame faded in the US, it increased in Germany. He became a popular speaker, filling lecture halls and giving interviews in German.
The later Weizenbaum was increasingly pessimistic about the future, much more so than he had been in the 1970s. Climate change terrified him. Still, he held out hope for the possibility of radical change. As he put it in a January 2008 article for Süddeutsche Zeitung: “The belief that science and technology will save the Earth from the effects of climate breakdown is misleading. Nothing will save our children and grandchildren from an Earthly hell. Unless: we organise resistance against the greed of global capitalism.” Two months later, on 5 March 2008, Weizenbaum died of stomach cancer. He was 85.
By the time Weizenbaum died, AI had a bad reputation. The term had become synonymous with failure. The ambitions of McCarthy, formulated at the height of the American century, were gradually extinguished in the subsequent decades. Getting computers to perform tasks associated with intelligence, like converting speech to text, or translating from one language to another, turned out to be much harder than anticipated.
Today, the situation looks rather different. We have software that can do speech recognition and language translation quite well. We also have software that can identify faces and describe the objects that appear in a photograph. This is the basis of the new AI boom that has taken place since Weizenbaum’s death. Its most recent iteration is centred on “generative AI” applications like ChatGPT, which can synthesise text, audio and images with increasing sophistication.
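At their core, generative text systems work by repeatedly sampling a plausible next token given what came before. A toy bigram model is hopelessly crude next to ChatGPT, and the corpus below is invented for illustration, but it rests on the same underlying principle of generating text from co-occurrence statistics rather than comprehension:

```python
import random
from collections import defaultdict

# A bigram text generator: record which word follows which, then walk
# the chain by sampling. Modern language models are incomparably larger
# and subtler, but likewise generate by predicting what comes next.
corpus = ("the machine speaks and the machine listens "
          "and the user believes the machine").split()

successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(successors[word])  # sample a plausible next word
    output.append(word)
print(" ".join(output))
```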
At a technical level, the techniques that we call AI are not the same ones that Weizenbaum had in mind when he commenced his critique of the field a half-century ago. Contemporary AI relies on “neural networks”, a data-processing architecture loosely inspired by the human brain. Neural networks had largely fallen out of fashion in AI circles by the time Computer Power and Human Reason came out, and would not undergo a serious revival until several years after Weizenbaum’s death.
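Mechanically, the inspiration is loose indeed. A neural network is, at bottom, layers of weighted sums passed through simple nonlinear functions. The sketch below shows the forward pass of a tiny two-layer network; the weights are random placeholders (real systems learn theirs from data) and the dimensions are chosen only for illustration:

```python
import numpy as np

# Forward pass of a minimal two-layer neural network: each layer is a
# matrix of weights, a bias, and a nonlinearity. Here the weights are
# random stand-ins rather than trained values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # 4 hidden units -> 2 outputs

def forward(x):
    hidden = np.maximum(0.0, W1 @ x + b1)   # ReLU: the nonlinearity
    return W2 @ hidden + b2                 # raw output scores

print(forward(np.array([1.0, 0.5, -0.2])))
```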
But Weizenbaum was always less concerned by AI as a technology than by AI as an ideology – that is, in the belief that a computer can and should be made to do everything that a human being can do. This ideology is alive and well. It may even be stronger than it was in Weizenbaum’s day.
Certain of Weizenbaum’s nightmares have come true: so-called risk assessment instruments are being used by judges across the US to make crucial decisions about bail, sentencing, parole and probation, while AI-powered chatbots are routinely touted as an automated alternative to seeing a human therapist. The consequences may have been about as grotesque as he expected. According to reports earlier this year, a Belgian father of two killed himself after spending weeks talking with an AI avatar named … Eliza. The chat logs that his widow shared with the Brussels-based newspaper La Libre show Eliza actively encouraging the man to kill himself.
A humanoid robot interacting with visitors at the AI for Good summit in Geneva earlier this month.
On the other hand, Weizenbaum would probably be heartened to learn that AI’s potential for destructiveness is now a matter of immense concern. It preoccupies not only policymakers – the EU is finalising the world’s first comprehensive AI regulation, while the Biden administration has rolled out a number of initiatives around “responsible” AI – but AI practitioners themselves.
Broadly, there are two schools of thought today about the dangers of AI. The first – influenced by Weizenbaum – focuses on the risks that exist now. For instance, experts such as the linguist Emily M Bender draw attention to how large language models of the kind that sit beneath ChatGPT can echo regressive viewpoints, like racism and sexism, because they are trained on data drawn from the internet. Such models should be understood as a kind of “parrot”, she and her co-authors write in an influential 2021 paper, “haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine.”

The second school of thought prefers to think in speculative terms. Its adherents are less interested in the harms that are already here than in the ones that may someday arise – in particular the “existential risk” of an AI that becomes “superintelligent” and wipes out the human race. Here the reigning metaphor is not a parrot but Skynet, the genocidal computer system from the Terminator films. This perspective enjoys the ardent support of several tech billionaires, including Elon Musk, who have financed a network of like-minded thinktanks, grants and scholarships. It has also attracted criticism from members of the first school, who observe that such doomsaying is useful for the industry because it diverts attention away from the real, current problems that its products are responsible for. If you “project everything into the far future,” notes Meredith Whittaker, you leave “the status quo untouched”.
Weizenbaum, ever attentive to the ways in which fantasies about computers can serve powerful interests, would probably agree. But there is nonetheless a thread of existential risk thinking that has some overlap with his own: the idea of AI as alien. “A superintelligent machine would be as alien to humans as human thought processes are to cockroaches,” argues the philosopher Nick Bostrom, while the writer Eliezer Yudkowsky likens advanced AI to “an entire alien civilisation”.
Weizenbaum would add the following caveat: AI is already alien, even without being “superintelligent”. Humans and computers belong to separate and incommensurable realms. There is no way of narrowing the distance between them, as the existential risk crowd hopes to do through “AI alignment”, a set of practices for “aligning” AI with human goals and values to prevent it from becoming Skynet. For Weizenbaum, we cannot humanise AI because AI is irreducibly non-human. What you can do, however, is not make computers do (or mean) too much. We should never “substitute a computer system for a human function that involves interpersonal respect, understanding and love”, he wrote in Computer Power and Human Reason. Living well with computers would mean putting them in their proper place: as aides to calculation, never judgment.
Weizenbaum never ruled out the possibility that intelligence could someday develop in a computer. But if it did, he told the writer Daniel Crevier in 1991, it would “be at least as different as the intelligence of a dolphin is to that of a human being”. There is a possible future hiding here that is neither an echo chamber filled with racist parrots nor the Hollywood dystopia of Skynet. It is a future in which we form a relationship with AI as we would with another species: awkwardly, across great distances, but with the potential for some rewarding moments. Dolphins would make bad judges and terrible shrinks. But they might make for interesting friends.
" |
255 | 2023 | "Artificial Intelligence - The Atlantic" | "https://www.theatlantic.com/category/ai-artificial-intelligence" | "Artificial Intelligence
"The Sudden Fall of Sam Altman" by Ross Andersen (November 17, 2023): What it means that the world’s most powerful AI executive is out of a job.
"The White House Is Preparing for an AI-Dominated Future" by Karen Hao and Matteo Wong (October 30, 2023): President Biden’s big swing on AI is as impressive and confusing as the technology itself.
"AI Has a Hotness Problem" by Caroline Mimbs Nyce (October 24, 2023): In the world of generated imagery, you’re either drop-dead gorgeous or a wrinkled, bug-eyed freak.
"Computers Are Learning to Smell" by Matteo Wong (October 13, 2023): AI could revolutionize our understanding of one of the most mysterious human senses.
"The New AI Panic" by Karen Hao (October 11, 2023): Washington and Beijing have been locked in a conflict over AI development. Now a new battle line is being drawn.
"AI’s Present Matters More Than Its Imagined Future" by Inioluwa Deborah Raji (October 4, 2023): Let’s not spend too much time daydreaming.
"Artists Are Losing the War Against AI" by Matteo Wong (October 2, 2023): OpenAI has introduced a tool for artists to keep their images from training future AI programs. It may not make a difference.
"My Books Were Used to Train Meta’s Generative AI. Good." by Ian Bogost (September 27, 2023): It can have my next one too.
"A New Coca-Cola Flavor at the End of the World" by Kaitlyn Tiffany (September 26, 2023): Y3000, the latest Coke flavor, was purportedly made with the assistance of AI. What does it taste like?
"So Much for ‘Learn to Code’" by Kelli María Korducki (September 26, 2023): In the age of AI, computer science is no longer the safe major.
"What I Found in a Database Meta Uses to Train Generative AI" by Alex Reisner (September 25, 2023): Nobel-winning authors, Dungeons and Dragons, Christian literature, and erotica all serve as datapoints for the machine.
"These 183,000 Books Are Fueling the Biggest Fight in Publishing and Tech" by Alex Reisner (September 25, 2023): Use our new search tool to see which authors have been used to train the machines.
"Why Go With an Evil-Looking Orb?" by Kaitlyn Tiffany (September 8, 2023): The controversial crypto project Worldcoin asks people to look into a shiny orb to have their irises scanned. It’s a bit on the nose.
"Robots Are Already Killing People" by Bruce Schneier and Davi Ottenheimer (September 6, 2023): The AI boom only underscores a problem that has existed for years.
"High-School English Needed a Makeover Before ChatGPT" by Daniel Herman (August 30, 2023): I used to make my students write essay after essay. There was always a better way.
"It’s a Weird Time for Driverless Cars" by Caroline Mimbs Nyce (August 29, 2023): The robotaxis now hitting American streets are troubling and amazing all at once.
"The Internet’s Next Great Power Suck" by Matteo Wong (August 23, 2023): AI’s carbon emissions are about to be a problem.
"Revealed: The Authors Whose Pirated Books Are Powering Generative AI" by Alex Reisner (August 19, 2023): Stephen King, Zadie Smith, and Michael Pollan are among thousands of writers whose copyrighted works are being used to train large language models.
"Before a Bot Steals Your Job, It Will Steal Your Name" by Jacob Sweet (August 11, 2023): The future of AI looks a lot like Tessa, Ernie, and Amy.
"Can an AI Save a Life?" by Ethan Brooks and Hanna Rosin (August 10, 2023): Michael spent years fighting isolation, depression, and despair. Then he met Sam. Then Sam changed.
" |
259 | 2000 | "The Coming Air Age - The Atlantic" | "https://www.theatlantic.com/magazine/archive/1942/09/the-coming-air-age/306248" | "
The Coming Air Age
The time is 1955; the place a lovely meadow surrounded by deep woods on a hilltop overlooking a beautiful lake in the Catskill Mountains 120 miles from New York. It is quarter past eight in the morning, and you are about to commute to your office in the city. Yet there is no paved highway nearer than fifteen miles, and it is fifty to a railroad station.
Now you hear a low hum, and over the horizon appears a flying machine. You press the button of a box near by and a radio signal flashes to the machine. The aircraft, looking oddly like a horizontal electric fan, drones toward you. When the pilot is directly overhead, all forward movement of the machine ceases and it descends vertically until the cabin door is within a foot of the ground.
On the machine's gray side is painted Helicopter Express to New York.
As you make ready to enter, the direct-lift machine does not touch the ground; it poises motionless under its whirling rotor blades like a gigantic hummingbird. The door opens and you step inside; you nod a greeting to the co-pilot who takes your commutation ticket, you wave to those of the other fifteen passengers you know. The door closes and the helicopter immediately ascends vertically to 1000 feet. Now it darts ahead, quickly attaining a forward speed of 140 miles an hour.
The co-pilot says conversationally, "How do you like your new home? Good, eh? Popular spot here. So many people have moved into these mountains that we've had to put on an extra bus to carry them." Fifty minutes later the helicopter bus hovers over a midtown New York building, descends slowly to alight on a roof space some sixty yards square. You go into the building, take the elevator to the street below, and walk half a block to your office. Not quite an hour has elapsed since you drank your morning coffee in your home.

Does this sound like a fantasy imagined by Jules Verne? If so, I can assure you, as a practical aeronautical engineer, that such a trip is neither fantastic nor impractical. Any of us who are alive ten years after this Second World War is won will see and use hundreds of short-run helicopter bus services. We shall see hundreds of thousands of privately owned direct-lift machines carrying Americans about their business and their pleasures.
In forecasting this aviation development I am not drawing upon any imagination, nor am I depending upon the future invention of a direct-lift machine. A practical helicopter that can do everything I have just described is at this instant within a hundred yards of me. Less than an hour ago this craft was hovering motionless ten feet off the ground while a man climbed to the cabin by a rope ladder. With a pointed stick on the nose of our helicopter, it was possible to spear a wooden ring twelve inches in diameter fastened to a pole only four feet from the ground. The helicopter could be backed, turned, and stopped motionless in the air right in front of a man who plucked the ring off the helicopter's nose. In April 1941, the VS-300, piloted by its designer, exceeded the record of endurance for this type of craft by remaining in the air for one hour, thirty-two minutes. The novelty of this record flight was that the ship hovered during the entire period over one spot less than half an acre in area. Since that time considerable further progress has been achieved with this project.
But for the fact that the helicopter is now a war weapon—which means that all improvements must be shrouded in military secrecy—I could describe additional details which would show why I am convinced that a helicopter bus service, for instance, is not only practicable but, in fact, inevitable. Had the Second World War not turned all our thoughts to instruments of destruction, I do not believe you would have to wait another decade to see hundreds of thousands of helicopters in daily use.
So I must be content with picturing for you this coming air age as I believe it will be. The first question, naturally, is why mass flying should need to wait for the direct-lift machine—particularly because, in the past twenty-five years, many prophets have forecast air-minded millions taking to the sky in air-flivvers, the foolproof plane that anyone could pilot. Why did these prophecies fail of realization? The answer, I believe, lies in the fact that as airplanes developed in size and range, the speed necessary for landing and take-off also increased. Indeed, airports grew so enormous in size that they had to be moved miles away from centers of population. Today, if you wish to take a journey of 200 miles, you spend thirty minutes riding to the airport, one hour traveling 200 miles, and thirty more minutes getting from the terminal airport to your destination. Thus the airplane came to be the slave of the airport and, like the railroad, became of value mainly as a medium of travel between established public terminals.
Another drawback to mass use of the airplane is that the speed of landing and takeoff usually exceeds the speed we are accustomed to in the automobile. Landing and taking off an airplane at such speeds demands good reflexes, quick decisions, and immediate action, particularly under unfavorable weather conditions. Hence the millions of Americans in middle life who can afford an airplane fail to buy one because they believe the machine takes more skill than they possess; and also because the airplane as a rule cannot carry them directly from home to office.
Yet these are the millions whose purchases made possible the fabulous automobile era. These are the millions who must be sold on the simplicity and safety of flying if we are to have—as we shall have—the era of aviation.
I first experimented with the helicopter as early as 1908. Now, once again, more than thirty years later, I have turned to a study of the direct-lift machine as the only aircraft that could take the speed out of landing and take-off, eliminate the necessity of runways, and hence bring to flying the door-to-door flexibility of the automobile.
The direct-lift machine, as finally made practical, has characteristics possessed by no other means of conveyance. It can hover, ascend and descend vertically at any speed you choose; it can stop, back up, go sideways with no forward motion. It is simple to operate and service. These are, I am certain, the essential attributes of an aircraft that can be used by hundreds of thousands of men and women, old and young.
Once again let us peer briefly at 1955 and see how your wife handles a typical family helicopter as she flies fifty miles to spend an hour with a friend. She opens the doors of the helicopter hangar that is only slightly larger and higher than your old two-car garage. She pushes the starter, the motor purrs. Seated in the two-place cabin, she presses a clutch that applies the engine power to the wheels. For this is a roadable model; she does not have to push or pull it to the lawn. The helicopter drives itself out of its garage to a suitable space near your badminton court. Here she disengages the wheel-clutch and applies the power to the overhead rotor blades. Your wife is now ready to ascend. How does she accomplish this? To explain, let me describe the controls. Directly before her is a knobbed control stick, reminiscent of the gearshift on the earlier automobiles. On her left is another lever like the familiar emergency brake. Comfortable to her feet are two pedals resembling the clutch and brake of a car. There is a throttle quadrant near her hand, and among the instruments on the panel before her is a tachometer to count the number of revolutions a minute made by the rotor blades.
Now she opens the throttle. The engine, well muffled, picks up speed until the tachometer tells her the rotor blades are whirling 240 revolutions a minute—or the equivalent of 275 miles an hour at the tip of the blade. The rotors must whirl at this rate before she applies the lift.
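Those two figures are mutually consistent: a tip speed of 275 miles an hour at 240 revolutions a minute implies a rotor about 32 feet in diameter. A short calculation confirms it:

```python
import math

# Checking the quoted figures: tip speed v = omega * r, so the rotor
# radius is v / omega. 275 mph at 240 rpm works out to a rotor of
# roughly 32 feet in diameter.
tip_speed_ftps = 275 * 5280 / 3600            # 275 mph in feet per second
omega = 240 * 2 * math.pi / 60                # 240 rpm in radians per second
radius_ft = tip_speed_ftps / omega
print(f"implied rotor diameter: {2 * radius_ft:.1f} ft")
```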
Now her hand pulls gently on the left-hand lift lever as if she were applying an emergency brake; only, in this instance, her pull changes the pitch of the rotor blades so that they bite more deeply, more powerfully into the air. The machine becomes light, quivers with eagerness to be off.
Another fraction of an inch pull on the lift lever and gently, smoothly, the helicopter begins to ascend straight up. She controls the rate of rise by increasing or lessening the rotor blade pitch by the lift lever. She permits the machine to ascend to 1200 feet. Now she pushes the center control stick forward. She is tilting the rotor blades—and the machine, too, slightly—so that they bite the air in a forward motion. The helicopter gets under way. From now on, all the helicopter's movements, save rise and descent, are controlled by the center stick. If she wishes to go forward more swiftly, she pushes the stick away from her. If she wishes to stop and hover, she leaves it in center—in the neutral of an automobile gearshift. If she wants to back up, she pulls it toward her; and she presses it right or left if she wishes to make a turn in those directions—or to go sideways with no forward movement.
Now she makes a last adjustment on the lift lever—a helicopter has a slight tendency to rise as it attains forward speed and she must adjust the rotor pitch to it. She turns the machine to the left to pick up the plainly marked air route to her friend's home. She is cruising comfortably at 120 miles an hour. In thirty minutes she sights her friend's house. Firmly she pulls back on the control stick, which slows down the helicopter to a stop 1200 feet above green lawn. She hovers, prepares to descend.
How does she do this? She sits with her right hand lightly on the control stick, her left gripping the lift lever. With the control stick she holds the helicopter motionless, against a light breeze, pressing forward, back, right or left as the case may be—just as she would jockey a motorcar into a parking space.
Gradually she releases the lift lever. As the rotor blades bite less powerfully at the air, the helicopter sinks gently to earth. She can control the descent to one foot a minute if she chooses. The wheels touch the ground, the shock absorbers lower the cabin without a jar. She turns off the ignition switch and climbs briskly out.
Does this appear complicated? If so, it is only because that which we have never experienced always seems complex. Actually, the operation is most simple. There are fewer control motions than in handling an automobile, and there is no need for the simultaneous actions of throwing a gearshift, applying the foot throttle, letting in the clutch, and steering a careful course, which make the control of an automobile at first so confusing. Nor is there the immediate speed of motorcar and airplane to tense the nerves. And once the helicopter take-off has been made and altitude achieved, the boundless spaces of the sky offer an uncrowded highway that leads anywhere without constant vigilance.
Because we are accustomed to them, the hazards and complications of driving an automobile are rarely realized. Habit makes us accept the swift car that speeds past us with only inches to spare; the skiddy road surface; the traffic jams; the car that suddenly darts from a side road into our path; the peril of a driving mistake that must be instantly corrected to avoid disaster. But that there is nerve strain is shown by the quick irritability of any two motorists arguing about a minor mishap, or failure of one to operate his car as was expected by the other.
I believe that if chance had produced the helicopter for general use before the automobile was invented, people would recoil in dismay at the hazards of a Sunday drive on a modern highway in what would be, to them, a newfangled dangerous contraption.
And the ability of the helicopter to hover and ascend and descend vertically gives the helicopter this advantage over the, airplane: the pilot does not have to gauge height and distance and rate of speed in gliding into an airport. Nor must the trees, telephone poles, and houses near an airport be sharply measured mentally to clear them in a take-off. A helicopter needs only slightly more than the diameter of its rotor blade circle to rise and descend.
But, you may ask, what happens to your wife and your helicopter if the engine should suddenly stop in mid-air? Certainly, without its power she must descend. What will happen to her? If the engine fails, a clutch automatically disengages the engine from the rotor blades. These continue to spin by the air pressure. All other controls remain normal, and those spinning rotor blades enable the craft to descend safely from any altitude. But your wife, as she would if she had a tire puncture, looks for a place to stop. On her left is a small meadow. She thrusts the control stick forward and to the left, and the helicopter angles downward in that direction. As the ground approaches she pulls the stick back to check the forward movement. The helicopter lands with a slight forward speed, and may coast ten or twelve feet.
These actions of your wife are as simple in their way as handling a motorcar. Indeed, perhaps simpler. Any moving vehicle needs to be controlled, but the helicopter will go automatically into the normal gliding position when the engine stops; and your wife has only to pick out a suitable place to land. Even if she makes a faulty movement of the controls at the contact with the ground, this would involve, as a rule, only damage to the machine and not to the occupant.
Now, you may ask, what must I pay for my helicopter? Fortunately, the direct-lift machine is ideally adapted for mass production. Manufactured by hundreds of thousands, it will cost about as much as a medium-priced automobile. Because of the principle involved, the average medium-priced helicopter will probably not exceed the speed of 140 miles an hour. Twenty persons will probably be as many passengers as can be carried. Made entirely of metal, and having few working parts, the helicopter lends itself to assembly-line manufacture as easily as did the automobile.
Nor will the helicopter cost much to maintain. One of the drawbacks to the greater use of small private airplanes has been hangar rental at an airport. The direct-lift machine needs no airport; there is no hangar rental because it is housed in a garage on your own grounds.
A light two-seater helicopter can make ten miles to a gallon of gasoline. Time may better this figure. And the cost of servicing will be no more, certainly—and perhaps even less—than for your automobile. A helicopter operates with uniform rhythm. Whether you are flying at three miles an hour or 140, the rotor blades are spinning at a nearly constant speed. An automobile with its frequently shifting rates of speed and greater number of parts suffers from greater wear. An automobile is serviced, theoretically at least, every thousand miles. A helicopter will get a similar servicing approximately every hundred hours, which would mean about 5000 to 9000 miles. Finally, let me add that dust, the enemy of machinery, is rarely found in the clean air of the heights.
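The servicing figure embeds an assumption worth making explicit: 100 flying hours corresponds to 5000 to 9000 miles only at an average speed of 50 to 90 miles an hour, well below the 120 mile-an-hour cruise, which implies a good deal of hovering and slow flying in ordinary use:

```python
# The service interval, unpacked: miles per 100-hour service at the
# average speeds the 5,000-9,000 mile range implies.
service_hours = 100
for avg_speed_mph in (50, 90):
    print(f"{avg_speed_mph} mph average -> {service_hours * avg_speed_mph} miles per service")
```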
Learning to fly a helicopter will be no more difficult than learning to drive an automobile. The time necessary will vary with the individual, but probably twelve to twenty hours of instruction will be ample for the normal person. And the actual teaching operation will be much simpler than with either the motorcar or the airplane.
Suppose, for instance, you decide to buy a two-place helicopter. The cost of teaching is included in the sales price, and you go to the dealer to be taught to operate the machine before taking delivery. He has a demonstrator in a suitable space. You both get in the cabin, and he explains the controls much as I have set them forth here. Now he presses the starter; the engine comes to life.
"Try it," he suggests. "Get the feel of it." You speed up the rotor blades, you pull the left lift lever, but you do not rise, as you expect, to a disconcerting height; instead, a cable attached to the helicopter holds it some four feet above the ground, permitting you safely and easily to study the control movements. How simple this method of accustoming yourself to flying a helicopter! And I am certain that flying a direct-lift machine will become, in time, just as much an automatic habit as driving your motorcar is now. I envision helicopters, attached to the earth by cables, at hundreds of county fairs; thus, thousands of men, women, and children will operate the controls, safely enjoy the thrill of flying, and become air-minded.
A question certain to trouble you is this: With hundreds of thousands, perhaps millions, of helicopters flying in all directions at once, what about sky congestion and air traffic problems? This problem has been foreseen and already a certain amount of planning has been done. While air traffic problems will not be at all comparable to what we now have with the motorcar, there must certainly be one-way air lanes within the limits and in the neighborhood of big centers of population. There will be "slow" and "fast" altitudes and you will choose the one that suits your temperament. Naturally, all helicopter highways will be at a safe distance from the airplane levels.
All helicopters, of course, will remain at a reasonable altitude over thickly populated centers. But there need be no such "flight plan " as airplanes now must often submit to before undertaking a long journey. Helicopter owners will fly at will, bound only by their common sense and some general traffic rules which are easily obeyed in the vast reaches of the sky.
Nor will the strict physical examination that now might prohibit many thousands from flying an airplane be necessary. A person who can drive an automobile can fly a helicopter; and a man or woman with middle-aged reflexes is just as safe in one as in the other because the helicopter, as a rule, is always moving slowly when close to the ground. The helicopter owner will have to pass no stricter examination than is—or should be—necessary for driving a motorcar. He should not be color-blind, his vision should be normal with or without glasses. A man or woman with a heart ailment should not drive a helicopter—nor an automobile.
You or your wife will have to pass a driving examination for a helicopter just as you must for an automobile. Then you can obtain insurance on your direct-lift machine as you do now on your car.
One more question will doubtless trouble you: What about the helicopter and the weather? Rain or snow, fog or wind? What happens to the helicopter and its average owner then? Man has always been limited by weather, even when he moved only on foot or horse. Common sense tells us that no one will stir abroad in bad weather with a helicopter or any other conveyance unless necessity compels. Yet if bad weather surprises you, the helicopter possesses advantages that no other vehicle can claim. If you are caught in mid-air by fog, you may slow down to five or fifteen miles an hour, cautiously descend and pick your way to your destination or to some place where you can wait until conditions improve. A heavy snowfall will immobilize airplane and motorcar until airport runways and roads have been cleared. But a helicopter, rising directly from the snow, is not stormbound and may go anywhere. A physician hastily summoned on an emergency call before the roads are cleared can descend by helicopter at his patient's door. This is not speculative; our helicopters have been flown in rain and fog and wind to test these characteristics.
The helicopter easily reaches what were hitherto inaccessible regions. Some time ago I received a letter from a man who owns a mine with valuable ore deposits. But the mine is at the bottom of a canyon whose walls are 2000 feet of sheer drop. It is extremely difficult for him to get supplies down and the ore out. The direct-lift machine, of course, when we begin making it for peace instead of for war, will open an easy air highway to his mine.
Though I may not touch upon the uses of the helicopter in wartime, its ability to hover and lower a rope, for example, to help an exhausted swimmer climb up to safety should give some indication of its utility. The power of a helicopter to hide behind trees and in valleys and behind hills and also to skim swiftly over the most formidable land obstacles suggests its value on a field of battle.
In contrast to a war plane, which is useless in peace, a military helicopter is 75 per cent adaptable to commercial use. The improvements we make today will be used by you in the air age of tomorrow. I am convinced that the manufacture, sale, and upkeep of the direct-lift machine will become a billion-dollar industry within ten years after this war, just as the automobile industry grew colossally after the last. There will spring up associated industries, and a new prosperity. There will be many startling changes in our way of life.
What will some of these changes be? Most important, I think, is that hundreds of thousands of people can return to the health and beauty of the countryside. Suburban development has hitherto been limited by the range of the bus, the automobile, and the commuter's train. This has put a high price on real estate adjacent to railroad or highway—prices beyond the reach of the low-income groups. But because of the helicopter, millions of acres of hitherto inaccessible land will be developed with small homes for medium- or low-income groups. A cheap, swift helicopter bus service will ferry these people to and from their work. Suburbs will include ten thousand or more square miles. Real-estate values will come within the reach of average incomes, and the people will literally return to the good earth.
I envision a new type of architecture—perhaps a house with a flat roof and a pleasantly designed helicopter hangar to one side of it, so that you have only to wheel the machine a few feet to take off. Hotels in beautiful surroundings will provide landing and hangar space for touring. Now, a day's tour of 400 miles in a motorcar is considered a great accomplishment. An air voyage of 1000 miles in a helicopter will not be unusual or fatiguing.
Long-distance transportation of passengers and freight over land and sea will definitely remain the job for the large airplane, which can carry out such flights with greater speed and efficiency. Therefore the long flights across the continent, as well as the air travel to Europe, South America, or other remote corners of the world, belong definitely to the airliner. But the short haul of less than 1000 miles is equally the task of the helicopter, which can do it with the greatest efficiency.
Express and air mail will be carried from the airports to final destination by helicopter. There will be a direct-lift machine service to take airliner passengers from the airport to the city in a few minutes. There will be special delivery of perishable food to your door.
By the use of a helicopter shuttle service, oranges that were yesterday on the trees in Florida and California will be today moved to the big air-freight terminals and dropped off there. They will then reach your grocer's the next day by the freight helicopter's connecting lines to small centers of population—and from your grocer's will come to your door by his helicopter delivery service.
The winter growth of fresh vegetables such as beans and tomatoes, celery and lettuce, in the warm South and the Far West has been hitherto restricted because of cost and time of transportation to market. The airline and helicopter freight service will speed such healthy foods to the ends of the nation. Hence our eating habits will change perhaps more than we realize. Strawberries in January, as it were, available for all.
Private and bus helicopters will make possible vacations at seashore or mountain for countless thousands. The helicopter will destroy space for millions of people. Nothing, I believe, is more delightful than touring in a helicopter. To hover and fill one's eyes with an enchanting vista is to bring joy to the soul. So, while he who must hurry will speed to other continents and across oceans by airliner, the man who has time may tour in his helicopter distances now impossible to the motorcar.
South America will become a continent easily accessible for such tourists. Shall we not, then, see a hemispheric unity based on the understanding of thousands who will see much of South America through the glass of their helicopter cabin? I think so, for jungles hold no terrors for the helicopter. A small clearing suffices; if a helicopter settles down on a jungle forest, the machine may be irretrievable, but the passengers calmly descend to earth by a rope ladder. Equipped with floats instead of wheels, it can rise from your door and, if necessary, land in swamp or lake, river or savannah. The vast and beautiful Canadian north country with its thousands of gem-like lakes will be visited by helicopter tourists who will look upon breathless scenes never before, perhaps, seen by eye of man. Yes, we Americans, with our eager curiosity and desire to travel, will bind together North and South America by helicopter; and what will come of that, no man may now even hazard a guess.
But since he who can, will seek the cool Arctic in the summer and the warm and beautiful southern countries in the winter, there will be gas stations on the Canadian and Alaskan tundras—and hotels, too—and skilled mechanics in Point Barrow or Belize to check your helicopter.
In the American democracy are bred the daring, imaginative people who will know how to make use of the breath-taking possibilities of the helicopter. And when they do—within a decade after the war—we shall enter the new air age in which the helicopter will contribute toward the greatest prosperity our people and our country and the world have ever known.
* Editor's note—Igor Sikorsky, the aero-engineer, was born in Kiev, Russia, in 1889, and became an American citizen in his fortieth year. He designed and built flying machines on his own account from 1908 to 1911, and even in these pioneer days his thoughts were gravitating towards the helicopter. In the present war he is perhaps best known for the Sikorsky multimotored amphibian plane. But the story which he told in detail to Frederick C. Painton is an amazing promise of what flying might be when at last the fighting is over.
" |
261 | 2,016 | "Inside the AI Factory: the humans that make tech seem human - The Verge" | "https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots" | "AI Is a Lot of Work
As the technology becomes ubiquitous, a vast tasker underclass is emerging — and not going anywhere.
By Josh Dzieza, an investigations editor covering tech, business, and climate change. Since joining The Verge in 2014, he’s won a Loeb Award for feature writing, among others.
Illustrations by Richard Parry for The Verge. Jun 20, 2023, 12:05 PM UTC. This article is a collaboration between New York Magazine and The Verge.
A few months after graduating from college in Nairobi, a 30-year-old I’ll call Joe got a job as an annotator — the tedious work of processing the raw information used to train artificial intelligence. AI learns by finding patterns in enormous quantities of data, but first that data has to be sorted and tagged by people, a vast workforce mostly hidden behind the machines. In Joe’s case, he was labeling footage for self-driving cars — identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of — frame by frame and from every possible camera angle. It’s difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10.
Then, in 2019, an opportunity arose: Joe could make four times as much running an annotation boot camp for a new company that was hungry for labelers. Every two weeks, 50 new recruits would file into an office building in Nairobi to begin their apprenticeships. There seemed to be limitless demand for the work. They would be asked to categorize clothing seen in mirror selfies, look through the eyes of robot vacuum cleaners to determine which rooms they were in, and draw squares around lidar scans of motorcycles. Over half of Joe’s students usually dropped out before the boot camp was finished. “Some people don’t know how to stay in one place for long,” he explained with gracious understatement. Also, he acknowledged, “it is very boring.”
But it was a job in a place where jobs were scarce, and Joe turned out hundreds of graduates. After boot camp, they went home to work alone in their bedrooms and kitchens, forbidden from telling anyone what they were working on, which wasn’t really a problem because they rarely knew themselves. Labeling objects for self-driving cars was obvious, but what about categorizing whether snippets of distorted dialogue were spoken by a robot or a human? Uploading photos of yourself staring into a webcam with a blank expression, then with a grin, then wearing a motorcycle helmet? Each project was such a small component of some larger process that it was difficult to say what they were actually training AI to do. Nor did the names of the projects offer any clues: Crab Generation, Whale Segment, Woodland Gyro, and Pillbox Bratwurst. They were non sequitur code names for non sequitur work.
As for the company employing them, most knew it only as Remotasks, a website offering work to anyone fluent in English. Like most of the annotators I spoke with, Joe was unaware until I told him that Remotasks is the worker-facing subsidiary of a company called Scale AI, a multibillion-dollar Silicon Valley data vendor that counts OpenAI and the U.S. military among its customers. Neither Remotasks’ nor Scale’s website mentions the other.
Much of the public response to language models like OpenAI’s ChatGPT has focused on all the jobs they appear poised to automate. But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behavior, and even less is known about the people doing the shaping.
For Joe’s students, it was work stripped of all its normal trappings: a schedule, colleagues, knowledge of what they were working on or whom they were working for. In fact, they rarely called it work at all — just “tasking.” They were taskers.
The anthropologist David Graeber defines “bullshit jobs” as employment without meaning or purpose, work that should be automated but for reasons of bureaucracy or status or inertia is not. These AI jobs are their bizarro twin: work that people want to automate, and often think is already automated, yet still requires a human stand-in. The jobs have a purpose; it’s just that workers often have no idea what it is.
The current AI boom — the convincingly human-sounding chatbots, the artwork that can be generated from simple prompts, and the multibillion-dollar valuations of the companies behind these technologies — began with an unprecedented feat of tedious and repetitive labor.
In 2007, the AI researcher Fei-Fei Li, then a professor at Princeton, suspected the key to improving image-recognition neural networks, a method of machine learning that had been languishing for years, was training on more data — millions of labeled images rather than tens of thousands. The problem was that it would take decades and millions of dollars for her team of undergrads to label that many photos.
Li found thousands of workers on Mechanical Turk, Amazon’s crowdsourcing platform where people around the world complete small tasks for cheap. The resulting annotated dataset, called ImageNet, enabled breakthroughs in machine learning that revitalized the field and ushered in a decade of progress.
Annotation remains a foundational part of making AI, but there is often a sense among engineers that it’s a passing, inconvenient prerequisite to the more glamorous work of building models. You collect as much labeled data as you can get as cheaply as possible to train your model, and if it works, at least in theory, you no longer need the annotators. But annotation is never really finished. Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data. These failures, called “edge cases,” can have serious consequences. In 2018, an Uber self-driving test car killed a woman because, though it was programmed to avoid cyclists and pedestrians, it didn’t know what to make of someone walking a bike across the street. The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them. Already, this has given rise to a global industry staffed by people like Joe who use their uniquely human faculties to help the machines.
Over the past six months, I spoke with more than two dozen annotators from around the world, and while many of them were training cutting-edge chatbots, just as many were doing the mundane manual labor required to keep AI running. There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don’t get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
“There’s an entire supply chain,” said Sonam Jindal, the program and research lead of the nonprofit Partnership on AI. “The general perception in the industry is that this work isn’t a critical part of development and isn’t going to be needed for long. All the excitement is around building artificial intelligence, and once we build that, it won’t be needed anymore, so why think about it? But it’s infrastructure for AI. Human intelligence is the basis of artificial intelligence, and we need to be valuing these as real jobs in the AI economy that are going to be here for a while.” The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks. There are also “crowdworking” sites like Mechanical Turk and Clickworker where anyone can sign up to perform tasks. In the middle are services like Scale AI. Anyone can sign up, but everyone has to pass qualification exams and training courses and undergo performance monitoring. Annotation is big business. Scale, founded in 2016 by then-19-year-old Alexandr Wang, was valued in 2021 at $7.3 billion, making him what Forbes called “the youngest self-made billionaire,” though the magazine noted in a recent profile that his stake has fallen on secondary markets since then.
This tangled supply chain is deliberately hard to map. According to people in the industry, the companies buying the data demand strict confidentiality. (This is the reason Scale cited to explain why Remotasks has a different name.) Annotation reveals too much about the systems being developed, and the huge number of workers required makes leaks difficult to prevent. Annotators are warned repeatedly not to tell anyone about their jobs, not even their friends and co-workers, but corporate aliases, project code names, and, crucially, the extreme division of labor ensure they don’t have enough information about them to talk even if they wanted to. (Most workers requested pseudonyms for fear of being booted from the platforms.) Consequently, there are no granular estimates of the number of people who work in annotation, but it is a lot, and it is growing. A recent Google Research paper gave an order-of-magnitude figure of “millions” with the potential to become “billions.” Automation often unfolds in unexpected ways. Erik Duhaime, CEO of medical-data-annotation company Centaur Labs, recalled how, several years ago, prominent machine-learning engineers were predicting AI would make the job of radiologist obsolete. When that didn’t happen, conventional wisdom shifted to radiologists using AI as a tool. Neither of those is quite what he sees occurring. AI is very good at specific tasks, Duhaime said, and that leads work to be broken up and distributed across a system of specialized algorithms and to equally specialized humans. An AI system might be capable of spotting cancer, he said, giving a hypothetical example, but only in a certain type of imagery from a certain type of machine; so now, you need a human to check that the AI is being fed the right type of data and maybe another human who checks its work before passing it to another AI that writes a report, which goes to another human, and so on. “AI doesn’t replace work,” he said. “But it does change how work is organized.” You might miss this if you believe AI is a brilliant, thinking machine. But if you pull back the curtain even a little, it looks more familiar, the latest iteration of a particularly Silicon Valley division of labor, in which the futuristic gleam of new technologies hides a sprawling manufacturing apparatus and the people who make it run. Duhaime reached back farther for a comparison, a digital version of the transition from craftsmen to industrial manufacturing: coherent processes broken into tasks and arrayed along assembly lines with some steps done by machines and some by humans but none resembling what came before.
Worries about AI-driven disruption are often countered with the argument that AI automates tasks, not jobs, and that these tasks will be the dull ones, leaving people to pursue more fulfilling and human work. But just as likely, the rise of AI will look like past labor-saving technologies, maybe like the telephone or typewriter, which vanquished the drudgery of message delivering and handwriting but generated so much new correspondence, commerce, and paperwork that new offices staffed by new types of workers — clerks, accountants, typists — were required to manage it. When AI comes for your job, you may not lose it, but it might become more alien, more isolating, more tedious.
Earlier this year, I signed up for Scale AI’s Remotasks. The process was straightforward. After entering my computer specs, internet speed, and some basic contact information, I found myself in the “training center.” To access a paying task, I first had to complete an associated (unpaid) intro course.
The training center displayed a range of courses with inscrutable names like Glue Swimsuit and Poster Macadamia. I clicked on something called GFD Chunking, which revealed itself to be labeling clothing in social-media photos.
The instructions, however, were odd. For one, they basically consisted of the same direction reiterated in the idiosyncratically colored and capitalized typography of a collaged bomb threat.
“DO LABEL items that are real and can be worn by humans or are intended to be worn by real people,” it read.
“All items below SHOULD be labeled because they are real and can be worn by real-life humans,” it reiterated above photos of an Air Jordans ad, someone in a Kylo Ren helmet, and mannequins in dresses, over which was a lime-green box explaining, once again, “DO Label real items that can be worn by real people.” I skimmed to the bottom of the manual, where the instructor had written in the large bright-red font equivalent of grabbing someone by the shoulders and shaking them, “THE FOLLOWING ITEMS SHOULD NOT BE LABELED because a human could not actually put wear any of these items!” above a photo of C-3PO, Princess Jasmine from Aladdin, and a cartoon shoe with eyeballs.
Feeling confident in my ability to distinguish between real clothes that can be worn by real people and not-real clothes that cannot, I proceeded to the test. Right away, it threw an ontological curveball: a picture of a magazine depicting photos of women in dresses. Is a photograph of clothing real clothing? No, I thought, because a human cannot wear a photograph of clothing.
Wrong! As far as AI is concerned, photos of real clothes are real clothes. Next came a photo of a woman in a dimly lit bedroom taking a selfie before a full-length mirror. The blouse and shorts she’s wearing are real. What about their reflection? Also real! Reflections of real clothes are also real clothes.
After an embarrassing amount of trial and error, I made it to the actual work, only to make the horrifying discovery that the instructions I’d been struggling to follow had been updated and clarified so many times that they were now a full 43 printed pages of directives: Do NOT label open suitcases full of clothes; DO label shoes but do NOT label flippers; DO label leggings but do NOT label tights; do NOT label towels even if someone is wearing it; label costumes but do NOT label armor. And so on.
There has been general instruction disarray across the industry, according to Milagros Miceli, a researcher at the Weizenbaum Institute in Germany who studies data work. It is in part a product of the way machine-learning systems learn. Where a human would get the concept of “shirt” with a few examples, machine-learning programs need thousands, and they need to be categorized with perfect consistency yet varied enough (polo shirts, shirts being worn outdoors, shirts hanging on a rack) that the very literal system can handle the diversity of the real world. “Imagine simplifying complex realities into something that is readable for a machine that is totally dumb,” she said.
The act of simplifying reality for a machine results in a great deal of complexity for the human. Instruction writers must come up with rules that will get humans to categorize the world with perfect consistency. To do so, they often create categories no human would use. A human asked to tag all the shirts in a photo probably wouldn’t tag the reflection of a shirt in a mirror because they would know it is a reflection and not real. But to the AI, which has no understanding of the world, it’s all just pixels and the two are perfectly identical. Fed a dataset with some shirts labeled and other (reflected) shirts unlabeled, the model won’t work. So the engineer goes back to the vendor with an update: DO label reflections of shirts. Soon, you have a 43-page guide descending into red all-caps.
“When you start off, the rules are relatively simple,” said a former Scale employee who requested anonymity because of an NDA. “Then they get back a thousand images and then they’re like, Wait a second, and then you have multiple engineers and they start to argue with each other. It’s very much a human thing.” The job of the annotator often involves putting human understanding aside and following instructions very, very literally — to think, as one annotator said, like a robot. It’s a strange mental space to inhabit, doing your best to follow nonsensical but rigorous rules, like taking a standardized test while on hallucinogens. Annotators invariably end up confronted with confounding questions like, Is that a red shirt with white stripes or a white shirt with red stripes? Is a wicker bowl a “decorative bowl” if it’s full of apples? What color is leopard print? When instructors said to label traffic-control directors, did they also mean to label traffic-control directors eating lunch on the sidewalk? Every question must be answered, and a wrong guess could get you banned and booted to a new, totally different task with its own baffling rules.
Most of the work on Remotasks is paid at a piece rate with a single task earning anywhere from a few cents to several dollars. Because tasks can take seconds or hours, wages are hard to predict. When Remotasks first arrived in Kenya, annotators said it paid relatively well — averaging about $5 to $10 per hour depending on the task — but the amount fell as time went on.
Scale AI spokesperson Anna Franko said that the company’s economists analyze the specifics of a project, the skills required, the regional cost of living, and other factors “to ensure fair and competitive compensation.” Former Scale employees also said pay is determined through a surge-pricing-like mechanism that adjusts for how many annotators are available and how quickly the data is needed.
According to workers I spoke with and job listings, U.S.-based Remotasks annotators generally earn between $10 and $25 per hour, though some subject-matter experts can make more. By the beginning of this year, pay for the Kenyan annotators I spoke with had dropped to between $1 and $3 per hour.
That is, when they were making any money at all. The most common complaint about Remotasks work is its variability; it’s steady enough to be a full-time job for long stretches but too unpredictable to rely on. Annotators spend hours reading instructions and completing unpaid trainings only to do a dozen tasks and then have the project end. There might be nothing new for days, then, without warning, a totally different task appears and could last anywhere from a few hours to weeks. Any task could be their last, and they never know when the next one will come.
This boom-and-bust cycle results from the cadence of AI development, according to engineers and data vendors. Training a large model requires an enormous amount of annotation followed by more iterative updates, and engineers want it all as fast as possible so they can hit their target launch date. There may be monthslong demand for thousands of annotators, then for only a few hundred, then for a dozen specialists of a certain type, and then thousands again. “The question is, Who bears the cost for these fluctuations?” said Jindal of Partnership on AI. “Because right now, it’s the workers.”
To succeed, annotators work together. When I told Victor, who started working for Remotasks while at university in Nairobi, about my struggles with the traffic-control-directors task, he told me everyone knew to stay away from that one: too tricky, bad pay, not worth it. Like a lot of annotators, Victor uses unofficial WhatsApp groups to spread the word when a good task drops. When he figures out a new one, he starts impromptu Google Meets to show others how it’s done. Anyone can join and work together for a time, sharing tips. “It’s a culture we have developed of helping each other because we know when on your own, you can’t know all the tricks,” he said.
Because work appears and vanishes without warning, taskers always need to be on alert. Victor has found that projects pop up very late at night, so he is in the habit of waking every three hours or so to check his queue. When a task is there, he’ll stay awake as long as he can to work. Once, he stayed up 36 hours straight labeling elbows and knees and heads in photographs of crowds — he has no idea why. Another time, he stayed up so long his mother asked him what was wrong with his eyes. He looked in the mirror to discover they were swollen.
Annotators generally know only that they are training AI for companies located vaguely elsewhere, but sometimes the veil of anonymity drops — instructions mentioning a brand or a chatbot say too much. “I read and I Googled and found I am working for a 25-year-old billionaire,” said one worker, who, when we spoke, was labeling the emotions of people calling to order Domino’s pizza. “I really am wasting my life here if I made somebody a billionaire and I’m earning a couple of bucks a week.” Victor is a self-proclaimed “fanatic” about AI and started annotating because he wants to help bring about a fully automated post-work future. But earlier this year, someone dropped a Time story into one of his WhatsApp groups about workers training ChatGPT to recognize toxic content who were getting paid less than $2 an hour by the vendor Sama AI. “People were angry that these companies are so profitable but paying so poorly,” Victor said. He was unaware until I told him about Remotasks’ connection to Scale. Instructions for one of the tasks he worked on were nearly identical to those used by OpenAI, which meant he had likely been training ChatGPT as well, for approximately $3 per hour.
“I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.” Identifying clothing and labeling customer-service conversations are just some of the annotation gigs available. Lately, the hottest on the market has been chatbot trainer. Because it demands specific areas of expertise or language fluency and wages are often adjusted regionally, this job tends to pay better. Certain types of specialist annotation can go for $50 or more per hour.
A woman I’ll call Anna was searching for a job in Texas when she stumbled across a generic listing for online work and applied. It was Remotasks, and after passing an introductory exam, she was brought into a Slack room of 1,500 people who were training a project code-named Dolphin, which she later discovered to be Google DeepMind’s chatbot, Sparrow, one of the many bots competing with ChatGPT. Her job is to talk with it all day. At about $14 an hour, plus bonuses for high productivity, “it definitely beats getting paid $10 an hour at the local Dollar General store,” she said.
Also, she enjoys it. She has discussed science-fiction novels, mathematical paradoxes, children’s riddles, and TV shows. Sometimes the bot’s responses make her laugh; other times, she runs out of things to talk about. “Some days, my brain is just like, I literally have no idea what on earth to ask it now, ” she said. “So I have a little notebook, and I’ve written about two pages of things — I just Google interesting topics — so I think I’ll be good for seven hours today, but that’s not always the case.” Each time Anna prompts Sparrow, it delivers two responses and she picks the best one, thereby creating something called “human-feedback data.” When ChatGPT debuted late last year, its impressively natural-seeming conversational style was credited to its having been trained on troves of internet data. But the language that fuels ChatGPT and its competitors is filtered through several rounds of human annotation. One group of contractors writes examples of how the engineers want the bot to behave, creating questions followed by correct answers, descriptions of computer programs followed by functional code, and requests for tips on committing crimes followed by polite refusals. After the model is trained on these examples, yet more contractors are brought in to prompt it and rank its responses. This is what Anna is doing with Sparrow. Exactly which criteria the raters are told to use varies — honesty, or helpfulness, or just personal preference. The point is that they are creating data on human taste, and once there’s enough of it, engineers can train a second model to mimic their preferences at scale, automating the ranking process and training their AI to act in ways humans approve of. The result is a remarkably human-seeming bot that mostly declines harmful requests and explains its AI nature with seeming self-awareness.
Put another way, ChatGPT seems so human because it was trained by an AI that was mimicking humans who were rating an AI that was mimicking humans who were pretending to be a better version of an AI that was trained on human writing.
This circuitous technique is called “reinforcement learning from human feedback,” or RLHF, and it’s so effective that it’s worth pausing to fully register what it doesn’t do. When annotators teach a model to be accurate, for example, the model isn’t learning to check answers against logic or external sources or about what accuracy as a concept even is. The model is still a text-prediction machine mimicking patterns in human writing, but now its training corpus has been supplemented with bespoke examples, and the model has been weighted to favor them. Maybe this results in the model extracting patterns from the part of its linguistic map labeled as accurate and producing text that happens to align with the truth, but it can also result in it mimicking the confident style and expert jargon of the accurate text while writing things that are totally wrong. There is no guarantee that the text the labelers marked as accurate is in fact accurate, and when it is, there is no guarantee that the model learns the right patterns from it.
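To make that ranking-and-mimicking step concrete, here is a minimal sketch in Python of how preference pairs like the ones Anna produces can train a reward model. It is illustrative only: the bag-of-words scorer stands in for a real language model, and the example responses are invented, but the loss is the standard pairwise form used in published RLHF work, which pushes the chosen response's score above the rejected one's.

```python
# A toy reward model trained on preference pairs (a sketch, not any lab's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = ["sorry", "i", "cannot", "help", "with", "that", "here", "is", "how", "to", "do", "it"]
IDX = {w: i for i, w in enumerate(VOCAB)}

def featurize(text: str) -> torch.Tensor:
    """Bag-of-words stand-in for a real model's representation of a response."""
    v = torch.zeros(len(VOCAB))
    for w in text.lower().split():
        if w in IDX:
            v[IDX[w]] += 1.0
    return v

reward_model = nn.Linear(len(VOCAB), 1)  # maps a response to a scalar "goodness" score
opt = torch.optim.Adam(reward_model.parameters(), lr=0.1)

# Each pair: (response the labeler preferred, response they rejected).
pairs = [("sorry i cannot help with that", "here is how to do it")]

for step in range(200):
    loss = torch.tensor(0.0)
    for chosen, rejected in pairs:
        gap = reward_model(featurize(chosen)) - reward_model(featurize(rejected))
        loss = loss - F.logsigmoid(gap).squeeze()  # small when chosen outscores rejected
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the scalar score stands in for the human rater.
print(reward_model(featurize("sorry i cannot help with that")).item())  # higher
print(reward_model(featurize("here is how to do it")).item())           # lower
```

Once enough pairs are collected, the chatbot itself can be tuned to maximize this learned score, which is the "mimic their preferences at scale" step described above.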
This dynamic makes chatbot annotation a delicate process. It has to be rigorous and consistent because sloppy feedback, like marking material that merely sounds correct as accurate, risks training models to be even more convincing bullshitters. An early OpenAI and DeepMind joint project using RLHF, in this case to train a virtual robot hand to grab an item, resulted in also training the robot to position its hand between the object and its raters and wiggle around such that it only appeared to its human overseers to grab the item. Ranking a language model’s responses is always going to be somewhat subjective because it’s language. A text of any length will have multiple elements that could be right or wrong or, taken together, misleading. OpenAI researchers ran into this obstacle in another early RLHF paper. Trying to get their model to summarize text, the researchers found they agreed only 60 percent of the time that a summary was good. “Unlike many tasks in [machine learning] our queries do not have unambiguous ground truth,” they lamented.
When Anna rates Sparrow’s responses, she’s supposed to be looking at their accuracy, helpfulness, and harmlessness while also checking that the model isn’t giving medical or financial advice or anthropomorphizing itself or running afoul of other criteria. To be useful training data, the model’s responses have to be quantifiably ranked against one another: Is a bot that helpfully tells you how to make a bomb “better” than a bot that’s so harmless it refuses to answer any questions? In one DeepMind paper, when Sparrow’s makers took a turn annotating, four researchers wound up debating whether their bot had assumed the gender of a user who asked it for relationship advice. According to Geoffrey Irving, one of DeepMind’s research scientists, the company’s researchers hold weekly annotation meetings in which they rerate data themselves and discuss ambiguous cases, consulting with ethical or subject-matter experts when a case is particularly tricky.
Anna often finds herself having to choose between two bad options. “Even if they’re both absolutely, ridiculously wrong, you still have to figure out which one is better and then write words explaining why,” she said. Sometimes, when both responses are bad, she’s encouraged to write a better response herself, which she does about half the time.
Because feedback data is difficult to collect, it fetches a higher price. Basic preferences of the sort Anna is producing sell for about $1 each, according to people with knowledge of the industry. But if you want to train a model to do legal research, you need someone with training in law, and this gets expensive. Everyone involved is reluctant to say how much they’re spending, but in general, specialized written examples can go for hundreds of dollars, while expert ratings can cost $50 or more. One engineer told me about buying examples of Socratic dialogues for up to $300 a pop. Another told me about paying $15 for a “darkly funny limerick about a goldfish.” OpenAI, Microsoft, Meta, and Anthropic did not comment about how many people contribute annotations to their models, how much they are paid, or where in the world they are located. Irving of DeepMind, which is a subsidiary of Google, said the annotators working on Sparrow are paid “at least the hourly living wage” based on their location. Anna knows “absolutely nothing” about Remotasks, but Sparrow has been more open. She wasn’t the only annotator I spoke with who got more information from the AI they were training than from their employer; several others learned whom they were working for by asking their AI for its company’s terms of service. “I literally asked it, ‘What is your purpose, Sparrow?’” Anna said. It pulled up a link to DeepMind’s website and explained that it’s an AI assistant and that its creators trained it using RLHF to be helpful and safe.
Until recently, it was relatively easy to spot bad output from a language model. It looked like gibberish. But this gets harder as the models get better — a problem called “scalable oversight.” Google inadvertently demonstrated how hard it is to catch the errors of a modern-language model when one made it into the splashy debut of its AI assistant, Bard. (It stated confidently that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system,” which is wrong.) This trajectory means annotation increasingly requires specific skills and expertise.
Last year, someone I’ll call Lewis was working on Mechanical Turk when, after completing a task, he received a message inviting him to apply for a platform he hadn’t heard of. It was called Taskup.ai, and its website was remarkably basic: just a navy background with text reading GET PAID FOR TASKS ON DEMAND. He applied.
The work paid far better than anything he had tried before, often around $30 an hour. It was more challenging, too: devising complex scenarios to trick chatbots into giving dangerous advice, testing a model’s ability to stay in character, and having detailed conversations about scientific topics so technical they required extensive research. He found the work “satisfying and stimulating.” While checking one model’s attempts to code in Python, Lewis was learning too. He couldn’t work for more than four hours at a stretch, lest he risk becoming mentally drained and making mistakes, and he wanted to keep the job.
“If there was one thing I could change, I would just like to have more information about what happens on the other end,” he said. “We only know as much as we need to know to get work done, but if I could know more, then maybe I could get more established and perhaps pursue this as a career.” I spoke with eight other workers, most based in the U.S., who had similar experiences of answering surveys or completing tasks on other platforms and finding themselves recruited for Taskup.ai or several similarly generic sites, such as DataAnnotation.tech or Gethybrid.io. Often their work involved training chatbots, though with higher-quality expectations and more specialized purposes than other sites they had worked for. One was demonstrating spreadsheet macros. Another was just supposed to have conversations and rate responses according to whatever criteria she wanted. She often asked the chatbot things that had come up in conversations with her 7-year-old daughter, like “What is the largest dinosaur?” and “Write a story about a tiger.” “I haven’t fully gotten my head around what they’re trying to do with it,” she told me.
Taskup.ai, DataAnnotation.tech, and Gethybrid.io all appear to be owned by the same company: Surge AI. Its CEO, Edwin Chen, would neither confirm nor deny the connection, but he was willing to talk about his company and how he sees annotation evolving.
“I’ve always felt the annotation landscape is overly simplistic,” Chen said over a video call from Surge’s office. He founded Surge in 2020 after working on AI at Google, Facebook, and Twitter convinced him that crowdsourced labeling was inadequate. “We want AI to tell jokes or write really good marketing copy or help me out when I need therapy or whatnot,” Chen said. “You can’t ask five people to independently come up with a joke and combine it into a majority answer. Not everybody can tell a joke or solve a Python program. The annotation landscape needs to shift from this low-quality, low-skill mind-set to something that’s much richer and captures the range of human skills and creativity and values that we want AI systems to possess.” Last year, Surge relabeled Google’s dataset classifying Reddit posts by emotion. Google had stripped each post of context and sent them to workers in India for labeling. Surge employees familiar with American internet culture found that 30 percent of the labels were wrong. Posts like “hell yeah my brother” had been classified as annoyance and “Yay, cold McDonald’s. My favorite” as love.
Surge claims to vet its workers for qualifications — that people doing creative-writing tasks have experience with creative writing, for example — but exactly how Surge finds workers is “proprietary,” Chen said. As with Remotasks, workers often have to complete training courses, though unlike Remotasks, they are paid for it, according to the annotators I spoke with. Having fewer, better-trained workers producing higher-quality data allows Surge to compensate better than its peers, Chen said, though he declined to elaborate, saying only that people are paid “fair and ethical wages.” The workers I spoke with earned between $15 and $30 per hour, but they are a small sample of all the annotators, a group Chen said now consists of 100,000 people. The secrecy, he explained, stems from clients’ demands for confidentiality.
Surge’s customers include OpenAI, Google, Microsoft, Meta, and Anthropic. Surge specializes in feedback and language annotation, and after ChatGPT launched, it got an influx of requests, Chen said: “I thought everybody knew the power of RLHF, but I guess people just didn’t viscerally understand.” The new models are so impressive they’ve inspired another round of predictions that annotation is about to be automated. Given the costs involved, there is significant financial pressure to do so. Anthropic, Meta, and other companies have recently made strides in using AI to drastically reduce the amount of human annotation needed to guide models, and other developers have started using GPT-4 to generate training data. However, a recent paper found that GPT-4-trained models may be learning to mimic GPT’s authoritative style with even less accuracy, and so far, when improvements in AI have made one form of annotation obsolete, demand for other, more sophisticated types of labeling has gone up. This debate spilled into the open earlier this year, when Scale’s CEO, Wang, tweeted that he predicted AI labs will soon be spending as many billions of dollars on human data as they do on computing power; OpenAI’s CEO, Sam Altman, responded that data needs will decrease as AI improves.
Chen is skeptical AI will reach a point where human feedback is no longer needed, but he does see annotation becoming more difficult as models improve. Like many researchers, he believes the path forward will involve AI systems helping humans oversee other AI. Surge recently collaborated with Anthropic on a proof of concept, having human labelers answer questions about a lengthy text with the help of an unreliable AI assistant, on the theory that the humans would have to feel out the weaknesses of their AI assistant and collaborate to reason their way to the correct answer. Another possibility has two AIs debating each other and a human rendering the final verdict on which is correct. “We still have yet to see really good practical implementations of this stuff, but it’s starting to become necessary because it’s getting really hard for labelers to keep up with the models,” said OpenAI research scientist John Schulman in a recent talk at Berkeley.
“I think you always need a human to monitor what AIs are doing just because they are this kind of alien entity,” Chen said. Machine-learning systems are just too strange ever to fully trust. The most impressive models today have what, to a human, seems like bizarre weaknesses, he added, pointing out that though GPT-4 can generate complex and convincing prose, it can’t pick out which words are adjectives: “Either that or models get so good that they’re better than humans at all things, in which case, you reach your utopia and who cares?” As 2022 ended, Joe started hearing from his students that their task queues were often empty. Then he got an email informing him the boot camps in Kenya were closing. He continued training taskers online, but he began to worry about the future.
“There were signs that it was not going to last long,” he said. Annotation was leaving Kenya. From colleagues he had met online, he heard tasks were going to Nepal, India, and the Philippines. “The companies shift from one region to another,” Joe said. “They don’t have infrastructure locally, so it makes them flexible to shift to regions that favor them in terms of operation cost.” One way the AI industry differs from manufacturers of phones and cars is in its fluidity. The work is constantly changing, constantly getting automated away and replaced with new needs for new types of data. It’s an assembly line but one that can be endlessly and instantly reconfigured, moving to wherever there is the right combination of skills, bandwidth, and wages.
Lately, the best-paying work is in the U.S. In May, Scale started listing annotation jobs on its own website, soliciting people with experience in practically every field AI is predicted to conquer. There were listings for AI trainers with expertise in health coaching, human resources, finance, economics, data science, programming, computer science, chemistry, biology, accounting, taxes, nutrition, physics, travel, K-12 education, sports journalism, and self-help. You can make $45 an hour teaching robots law or make $25 an hour teaching them poetry. There were also listings for people with security clearance, presumably to help train military AI. Scale recently launched a defense-oriented language model called Donovan, which Wang called “ammunition in the AI war,” and won a contract to work on the Army’s robotic-combat-vehicle program.
Anna is still training chatbots in Texas. Colleagues have been turned into reviewers and Slack admins — she isn’t sure why, but it has given her hope that the gig could be a longer-term career. One thing she isn’t worried about is being automated out of a job. “I mean, what it can do is amazing,” she said of the chatbot. “But it still does some really weird shit.” When Remotasks first arrived in Kenya, Joe thought annotation could be a good career. Even after the work moved elsewhere, he was determined to make it one. There were thousands of people in Nairobi who knew how to do the work, he reasoned — he had trained many of them, after all. Joe rented office space in the city and began sourcing contracts: a job annotating blueprints for a construction company, another labeling fruits despoiled by insects for some sort of agricultural project, plus the usual work of annotating for self-driving cars and e-commerce.
But he has found his vision difficult to achieve. He has just one full-time employee, down from two. “We haven’t been having a consistent flow of work,” he said. There are weeks with nothing to do because customers are still collecting data, and when they’re done, he has to bring in short-term contractors to meet their deadlines: “Clients don’t care whether we have consistent work or not. So long as the datasets have been completed, then that’s the end of that.” Rather than let their skills go to waste, other taskers decided to chase the work wherever it went. They rented proxy servers to disguise their locations and bought fake IDs to pass security checks so they could pretend to work from Singapore, the Netherlands, Mississippi, or wherever the tasks were flowing. It’s a risky business. Scale has become increasingly aggressive about suspending accounts caught disguising their location, according to multiple taskers. It was during one of these crackdowns that my account got banned, presumably because I had been using a VPN to see what workers in other countries were seeing, and all $1.50 or so of my earnings were seized.
“These days, we have become a bit cunning because we noticed that in other countries they are paying well,” said Victor, who was earning double the Kenyan rate by tasking in Malaysia. “You do it cautiously.” Another Kenyan annotator said that after his account got suspended for mysterious reasons, he decided to stop playing by the rules. Now, he runs multiple accounts in multiple countries, tasking wherever the pay is best. He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other.
" |
262 | 2,023 | "OpenAI has published the text-generating AI it said was too dangerous to share - The Verge" | "https://www.theverge.com/2019/11/7/20953040/openai-text-generation-ai-gpt-2-full-model-release-1-5b-parameters" | "OpenAI has published the text-generating AI it said was too dangerous to share
The lab says it’s seen ‘no strong evidence of misuse so far’
By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
The research lab OpenAI has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.
The institute originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it’s released smaller, less complex versions of GPT-2 and studied their reception. Others also replicated the work. In a blog post this week, OpenAI now says it’s seen “no strong evidence of misuse” and has released the model in full.
GPT-2 can write fake news articles, stories, poems, and code
GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and it’ll supply a whole verse.
It’s tricky to convey exactly how good GPT-2’s output is, but the model frequently produces eerily cogent writing that can often give the appearance of intelligence (though that’s not to say what GPT-2 is doing involves anything we’d recognize as cognition). Play around with the system long enough, though, and its limitations become clear. It particularly suffers with the challenge of long-term coherence; for example, using the names and attributes of characters consistently in a story, or sticking to a single subject in a news article.
The best way to get a feel for GPT-2’s abilities is to try it out yourself. You can access a web version at TalkToTransformer.com and enter your own prompts. (A “transformer” is a component of machine learning architecture used to create GPT-2 and its fellows.)
Apart from the raw capabilities of GPT-2, the model’s release is notable as part of an ongoing debate about the responsibility of AI researchers to mitigate harm caused by their work. Experts have pointed out that easy access to cutting-edge AI tools can enable malicious actors, a dynamic we’ve seen with the use of deepfakes to generate revenge porn, for example. OpenAI limited the release of its model because of this concern.
However, not everyone applauded the lab’s approach. Many experts criticized the decision, saying it limited the amount of research others could do to mitigate the model’s harms, and that it created unnecessary hype about the dangers of artificial intelligence.
“The words ‘too dangerous’ were casually thrown out here without a lot of thought or experimentation,” researcher Delip Rao told The Verge back in February. “I don’t think [OpenAI] spent enough time proving it was actually dangerous.”
In its announcement of the full model this week, OpenAI noted that GPT-2 could be misused, citing third-party research stating the system could help generate “synthetic propaganda” for extreme ideological positions. But it also admitted that its fears that the system would be used to pump out a high volume of coherent spam, overwhelming online information systems like social media, have not yet come to pass.
The lab also noted that its own researchers had created automatic systems that could spot GPT-2’s output with ~95% accuracy, but that this figure was not high enough “for standalone detection” and means any system used to automatically spot fake text would need to be paired with human judges. This, though, is not particularly unusual for such moderation tasks, which often rely on humans in the loop to spot fake images and videos.
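To see why 95 percent accuracy is not enough on its own, consider a hedged sketch of the human-in-the-loop arrangement described here; the detector function, thresholds, and queue below are hypothetical stand-ins, not OpenAI's actual tooling.

```python
# Hypothetical triage around an imperfect fake-text detector: trust the
# classifier only at the extremes, and route the ambiguous middle band of
# scores to human judges, as the moderation setups described above do.

def detector_score(text: str) -> float:
    """Stand-in for a real classifier; returns P(text is machine-generated)."""
    return 0.5  # placeholder value so the sketch runs

def triage(text: str, human_queue: list) -> str:
    p = detector_score(text)
    if p > 0.99:
        return "flag as machine-generated"
    if p < 0.01:
        return "pass"
    human_queue.append(text)  # ~95% accuracy is too error-prone to act on alone
    return "send to human reviewer"

queue: list = []
print(triage("some suspicious text", queue))  # -> "send to human reviewer"
```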
OpenAI says it will continue to watch how GPT-2 is used by the community and public, and will further develop its policies on the responsible publication of AI research.
" |
263 | 2,023 | "The Chatbots May Poison Themselves - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/06/generative-ai-future-training-models/674478" | "AI Is an Existential Threat to Itself
Generative-AI programs may eventually consume material that was created by other machines—with disastrous consequences.
In the beginning, the chatbots and their ilk fed on the human-made internet. Various generative-AI models of the sort that power ChatGPT got their start by devouring data from sites including Wikipedia, Getty, and Scribd. They consumed text, images, and other content, learning through algorithmic digestion their flavors and texture, which ingredients go well together and which do not, in order to concoct their own art and writing. But this feast only whetted their appetite.
Generative AI is utterly reliant on the sustenance it gets from the web: Computers mime intelligence by processing almost unfathomable amounts of data and deriving patterns from them. ChatGPT can write a passable high-school essay because it has read libraries’ worth of digitized books and articles, while DALL-E 2 can produce Picasso-esque images because it has analyzed something like the entire trajectory of art history. The more they train on, the smarter they appear.
Eventually, these programs will have ingested almost every human-made bit of digital material.
And they are already being used to engorge the web with their own machine-made content, which will only continue to proliferate—across TikTok and Instagram, on the sites of media outlets and retailers, and even in academic experiments.
To develop ever more advanced AI products, Big Tech might have no choice but to feed its programs AI-generated content, or just might not be able to sift human fodder from the synthetic—a potentially disastrous change in diet for both the models and the internet, according to researchers.
The problem with using AI output to train future AI is straightforward. Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional—their outputs filled with biases, falsehoods, and absurdities. “Those mistakes will migrate into” future iterations of the programs, Ilia Shumailov, a machine-learning researcher at Oxford University, told me. “If you imagine this happening over and over again, you will amplify errors over time.” In a recent study on this phenomenon, which has not been peer-reviewed, Shumailov and his co-authors describe the conclusion of those amplified errors as model collapse: “a degenerative process whereby, over time, models forget,” almost as if they were growing senile. (The authors originally called the phenomenon “model dementia,” but renamed it after receiving criticism for trivializing human dementia.)
Generative AI produces outputs that, based on its training data, are most probable. (For instance, ChatGPT will predict that, in a greeting, doing? is likely to follow how are you.)
That means events that seem to be less probable, whether because of flaws in an algorithm or a training sample that doesn’t adequately reflect the real world—unconventional word choices, strange shapes, images of people with darker skin (melanin is often scant in image datasets)—will not show up as much in the model’s outputs, or will show up with deep flaws. Each successive AI trained on past AI would lose information on improbable events and compound those errors, Aditi Raghunathan, a computer scientist at Carnegie Mellon University, told me. You are what you eat.
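A toy next-word counter makes the point concrete. The four-line "corpus" below is invented, but the mechanics mirror the how are you example: whatever is merely uncommon in the training data is exactly what a most-probable-output generator stops producing.

```python
from collections import Counter

# Tiny corpus: what follows "how are you" in the training data.
corpus = [
    "how are you doing",
    "how are you doing",
    "how are you doing",
    "how are you holding",   # the improbable continuation
]

counts = Counter(line.split()[3] for line in corpus)
total = sum(counts.values())
probs = {word: n / total for word, n in counts.items()}

print(probs)                       # {'doing': 0.75, 'holding': 0.25}
print(max(probs, key=probs.get))   # 'doing': always emitting the most probable
                                   # word means 'holding' never appears, so a
                                   # model trained on these outputs never sees it
```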
Recursive training could magnify bias and error, as previous research also suggests—chatbots trained on the writings of a racist chatbot, such as early versions of ChatGPT that racially profiled Muslim men as “terrorists,” would only become more prejudiced. And if taken to an extreme, such recursion would also degrade an AI model’s most basic functions. As each generation of AI misunderstands or forgets underrepresented concepts, it will become overconfident about what it does know. Eventually, what the machine deems “probable” will begin to look incoherent to humans, Nicolas Papernot, a computer scientist at the University of Toronto and one of Shumailov’s co-authors, told me.
The study tested how model collapse would play out in various AI programs—think GPT-2 trained on the outputs of GPT-1, GPT-3 on the outputs of GPT-2, GPT-4 on the outputs of GPT-3, and so on, until the nth generation. A model that started out producing a grid of numbers displayed an array of blurry zeroes after 20 generations; a model meant to sort data into two groups eventually lost the ability to distinguish between them at all, producing a single dot after 2,000 generations. The study provides a “nice, concrete way of demonstrating what happens” with such a data feedback loop, Raghunathan, who was not involved with the research, said. The AIs gobbled up one another’s outputs, and in turn one another, a sort of recursive cannibalism that left nothing of use or substance behind—these are not Shakespeare’s anthropophagi , or human-eaters, so much as mechanophagi of Silicon Valley’s design.
The language model they tested, too, completely broke down. The program at first fluently finished a sentence about English Gothic architecture, but after nine generations of learning from AI-generated data, it responded to the same prompt by spewing gibberish: “architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.” For a machine to create a functional map of a language and its meanings, it must plot every possible word, regardless of how common it is. “In language, you have to model the distribution of all possible words that may make up a sentence,” Papernot said. “Because there is a failure [to do so] over multiple generations of models, it converges to outputting nonsensical sequences.” In other words, the programs could only spit back out a meaningless average—like a cassette that, after being copied enough times on a tape deck, sounds like static. As the science-fiction author Ted Chiang has written , if ChatGPT is a condensed version of the internet, akin to how a JPEG file compresses a photograph, then training future chatbots on ChatGPT’s output is “the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.” The risk of eventual model collapse does not mean the technology is worthless or fated to poison itself. Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, which is sponsored by the National Science Foundation, pointed to privacy and copyright concerns as potential reasons to train AI on synthetic data. Consider medical applications: Using real patients’ medical information to train AI poses huge privacy violations that using representative synthetic records could bypass—say, by taking a collection of people’s records and using a computer program to generate a new dataset that, in the aggregate, contains the same information. To take another example, limited training material is available in rare languages, but a machine-learning program could produce permutations of what is available to augment the dataset.
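The same breakdown can be caricatured for language. In this sketch (mine, not the researchers'), a bigram Markov chain stands in for the language model, and each generation is trained only on text the previous generation produced. Any word the sampler happens to skip can never reappear, so the vocabulary can only shrink, rarest words first:

```python
import random
from collections import defaultdict

random.seed(1)
# Mostly repetitive "human" text, plus a couple of rare words.
corpus = ("the cat sat on the mat " * 50 + "the axolotl pondered the riddle").split()

def train(tokens):
    table = defaultdict(list)  # word -> list of observed successor words
    for a, b in zip(tokens, tokens[1:]):
        table[a].append(b)
    return table

def generate(table, length=120):
    word = random.choice(list(table))
    out = [word]
    for _ in range(length - 1):
        successors = table.get(word)
        word = random.choice(successors) if successors else random.choice(list(table))
        out.append(word)
    return out

tokens = corpus
for gen in range(1, 9):
    tokens = generate(train(tokens))  # each model trains on the last one's output
    print(f"generation {gen}: vocabulary = {sorted(set(tokens))}")
```

Typically “axolotl” and “riddle” drop out within a generation or two; the common words survive, and the output drifts toward exactly the kind of meaningless average the photocopy analogy describes.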
The potential for AI-generated data to result in model collapse, then, emphasizes the need to curate training datasets. “Filtering is a whole research area right now,” Dimakis told me. “And we see it has a huge impact on the quality of the models”—given enough data, a program trained on a smaller amount of high-quality inputs can outperform a bloated one. Just as synthetic data aren’t inherently bad, “human-generated data is not a gold standard,” Ilia Shumailov said. “We need data that represents the underlying distribution well.” Human and machine outputs are just as likely to be misaligned with reality (many existing discriminatory AI products were trained on human creations). Researchers could potentially curate AI-generated data to alleviate bias and other problems, by training their models on more representative data. Using AI to generate text or images that counterbalance prejudice in existing datasets and computer programs, for instance, could provide a way to “potentially debias systems by using this controlled generation of data,” Aditi Raghunathan said.
A model that is shown to have dramatically collapsed to the extent that Shumailov and Papernot documented would never be released as a product, anyway. Of greater concern is the compounding of smaller, hard-to-detect biases and misperceptions—especially as machine-made content becomes harder , if not impossible, to distinguish from human creations. “I think the danger is really more when you train on the synthetic data and as a result have some flaws that are so subtle that our current evaluation pipelines do not capture them,” Raghunathan said. Gender bias in a résumé-screening tool, for instance, could, in a subsequent generation of the program, morph into more insidious forms. The chatbots might not eat themselves so much as leach undetectable traces of cybernetic lead that accumulate across the internet with time, poisoning not just their own food and water supply, but humanity’s.
" |
264 | 2,023 | "What Happens When AI Has Read Everything? - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/01/artificial-intelligence-ai-chatgpt-dall-e-2-learning/672754" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
What Happens When AI Has Read Everything? The dream of an artificial mind may never become a reality if AI runs out of quality prose to ingest—and there isn’t much left.
Artificial intelligence has in recent years proved itself to be a quick study, although it is being educated in a manner that would shame the most brutal headmaster. Locked into airtight Borgesian libraries for months with no bathroom breaks or sleep, AIs are told not to emerge until they’ve finished a self-paced speed course in human culture. On the syllabus: a decent fraction of all the surviving text that we have ever produced.
When AIs surface from these epic study sessions, they possess astonishing new abilities. People with the most linguistically supple minds—hyperpolyglots—can reliably flip back and forth between a dozen languages; AIs can now translate between more than 100 in real time. They can churn out pastiche in a range of literary styles and write passable rhyming poetry. DeepMind’s Ithaca AI can glance at Greek letters etched into marble and guess the text that was chiseled off by vandals thousands of years ago.
These successes suggest a promising way forward for AI’s development: Just shovel ever-larger amounts of human-created text into its maw, and wait for wondrous new skills to manifest. With enough data, this approach could perhaps even yield a more fluid intelligence, or a humanlike artificial mind akin to those that haunt nearly all of our mythologies of the future.
The trouble is that, like other high-end human cultural products, good prose ranks among the most difficult things to produce in the known universe. It is not in infinite supply, and for AI, not any old text will do: Large language models trained on books are much better writers than those trained on huge batches of social-media posts. (It’s best not to think about one’s Twitter habit in this context.) When we calculate how many well-constructed sentences remain for AI to ingest, the numbers aren’t encouraging. A team of researchers led by Pablo Villalobos at Epoch AI recently predicted that programs such as the eerily impressive ChatGPT will run out of high-quality reading material by 2027. Without new text to train on, AI’s recent hot streak could come to a premature end.
It should be noted that only a slim fraction of humanity’s total linguistic creativity is available for reading. More than 100,000 years have passed since radically creative Africans transcended the emotive grunts of our animal ancestors and began externalizing their thoughts into extensive systems of sounds. Every notion expressed in those protolanguages—and many languages that followed—is likely lost for all time, although it gives me pleasure to imagine that a few of their words are still with us. After all, some English words have a shockingly ancient vintage: Flow , mother , fire , and ash come down to us from Ice Age peoples.
Writing has allowed human beings to capture and store a great many more of our words. But like most new technologies, writing was expensive at first, which is why it was initially used primarily for accounting. It took time to bake and dampen clay for your stylus, to cut papyrus into strips fit to be latticed, to house and feed the monks who inked calligraphy onto vellum. These resource-intensive techniques could preserve only a small sampling of humanity’s cultural output.
Not until the printing press began machine-gunning books into the world did our collective textual memory achieve industrial scale. Researchers at Google Books estimate that since Gutenberg, humans have published more than 125 million titles, collecting laws, poems, myths, essays, histories, treatises, and novels. The Epoch team estimates that 10 million to 30 million of these books have already been digitized, giving AIs a reading feast of hundreds of billions of, if not more than a trillion, words.
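The arithmetic behind those word counts is rough but easy to reproduce. Here is a back-of-envelope check using the digitization range above and an assumed average book length (the per-book figure is my assumption for illustration, not the Epoch team's published number):

```python
low_books, high_books = 10_000_000, 30_000_000  # digitized titles, per the estimate above
words_per_book = 50_000                         # assumed average length

print(f"{low_books * words_per_book:.1e} to {high_books * words_per_book:.1e} words")
# 5.0e+11 to 1.5e+12, i.e., hundreds of billions to over a trillion words
```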
Those numbers may sound impressive, but they’re within range of the 500 billion words that trained the model that powers ChatGPT. Its successor, GPT-4, might be trained on tens of trillions of words.
Rumors suggest that when GPT-4 is released later this year, it will be able to generate a 60,000-word novel from a single prompt.
Ten trillion words is enough to encompass all of humanity’s digitized books, all of our digitized scientific papers, and much of the blogosphere. That’s not to say that GPT-4 will have read all of that material, only that doing so is well within its technical reach. You could imagine its AI successors absorbing our entire deep-time textual record across their first few months, and then topping up with a two-hour reading vacation each January, during which they could mainline every book and scientific paper published the previous year.
Just because AIs will soon be able to read all of our books doesn’t mean they can catch up on all of the text we produce. The internet’s storage capacity is of an entirely different order, and it’s a much more democratic cultural-preservation technology than book publishing. Every year, billions of people write sentences that are stockpiled in its databases, many owned by social-media platforms.
Random text scraped from the internet generally doesn’t make for good training data, with Wikipedia articles being a notable exception. But perhaps future algorithms will allow AIs to wring sense from our aggregated tweets, Instagram captions, and Facebook statuses. Even so, these low-quality sources won’t be inexhaustible. According to Villalobos, within a few decades, speed-reading AIs will be powerful enough to ingest hundreds of trillions of words—including all those that human beings have so far stuffed into the web.
Not every AI is an English major. Some are visual learners, and they too may one day face a training-data shortage. While the speed-readers were bingeing the literary canon, these AIs were strapped down with their eyelids held open, Clockwork Orange –style, for a forced screening comprising millions of images. They emerged from their training with superhuman vision. They can recognize your face behind a mask, or spot tumors that are invisible to the radiologist’s eye. On night drives, they can see into the gloomy roadside ahead where a young fawn is working up the nerve to chance a crossing.
Most impressive, AIs trained on labeled pictures have begun to develop a visual imagination.
OpenAI’s DALL-E 2 was trained on 650 million images, each paired with a text label. DALL-E 2 has seen the ocher handprints that Paleolithic humans pressed onto cave ceilings. It can emulate the different brushstroke styles of Renaissance masters. It can conjure up photorealistic macros of strange animal hybrids. An animator with world-building chops can use it to generate a Pixar-style character, and then surround it with a rich and distinctive environment.
Thanks to our tendency to post smartphone pics on social media, human beings produce a lot of labeled images, even if the label is just a short caption or geotag. As many as 1 trillion such images are uploaded to the internet every year, and that doesn’t include YouTube videos, each of which is a series of stills. It’s going to take a long time for AIs to sit through our species’ collective vacation-picture slideshow, to say nothing of our entire visual output. According to Villalobos, our training-image shortage won’t be acute until sometime between 2030 and 2060.
If indeed AIs are starving for new inputs by midcentury—or sooner, in the case of text—the field’s data-powered progress may slow considerably, putting artificial minds and all the rest out of reach. I called Villalobos to ask him how we might increase human cultural production for AI. “There may be some new sources coming online,” he told me. “The widespread adoption of self-driving cars would result in an unprecedented amount of road video recordings.” Villalobos also mentioned “synthetic” training data created by AIs. In this scenario, large language models would be like the proverbial monkeys with typewriters, only smarter and possessed of functionally infinite energy. They could pump out billions of new novels, each of Tolstoyan length. Image generators could likewise create new training data by tweaking existing snapshots, but not so much that they fall afoul of their labels. It’s not yet clear whether AIs will learn anything new by cannibalizing data that they themselves create. Perhaps doing so will only dilute the predictive potency they gleaned from human-made text and images. “People haven’t used a lot of this stuff, because we haven’t yet run out of data,” Jaime Sevilla, one of Villalobos’s colleagues, told me.
Villalobos’s paper discusses a more unsettling set of speculative work-arounds. We could, for instance, all wear dongles around our necks that record our every speech act. According to one estimate, people speak 5,000 to 20,000 words a day on average. Across 8 billion people, those pile up quickly. Our text messages could also be recorded and stripped of identifying metadata. We could subject every white-collar worker to anonymized keystroke recording, and firehose what we capture into giant databases to be fed into our AIs. Villalobos noted drily that fixes such as these are currently “well outside the Overton window.” Perhaps in the end, big data will have diminishing returns. Just because our most recent AI winter was thawed out by giant gobs of text and imagery doesn’t mean our next one will be. Maybe instead, it will be an algorithmic breakthrough or two that at last populate our world with artificial minds. After all, we know that nature has authored its own modes of pattern recognition, and that so far, they outperform even our best AIs. My 13-year-old son has ingested orders of magnitude fewer words than ChatGPT, yet he has a much more subtle understanding of written text. If it makes sense to say that his mind runs on algorithms, they’re better algorithms than those used by today’s AIs.
If, however, our data-gorging AIs do someday surpass human cognition, we will have to console ourselves with the fact that they are made in our image. AIs are not aliens. They are not the exotic other. They are of us, and they are from here. They have gazed upon the Earth’s landscapes. They have seen the sun setting on its oceans billions of times. They know our oldest stories. They use our names for the stars. Among the first words they learn are flow, mother, fire, and ash.
" |
265 | 2,023 | "Automatic gender recognition tech is dangerous, say campaigners: it’s time to ban it - The Verge" | "https://www.theverge.com/2021/4/14/22381370/automatic-gender-recognition-sexual-orientation-facial-ai-analysis-ban-campaign" | "The Verge homepage The Verge homepage The Verge The Verge logo.
Automatic gender recognition tech is dangerous, say campaigners: it’s time to ban it. Simplistic gender binaries infringe on the right to self-expression. By James Vincent, a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
Dangers posed by facial recognition like mass surveillance and mistaken identity have been widely discussed in recent years. But digital rights groups say an equally insidious use case is currently sneaking under the radar: using the same technology to predict someone’s gender and sexual orientation. Now, a new campaign has launched to ban these applications in the EU.
Trying to predict someone’s gender or sexuality from digitized clues is fundamentally flawed, says Os Keyes, a researcher who’s written extensively on the topic. This technology tends to reduce gender to a simplistic binary and, as a result, is often harmful to individuals like trans and nonbinary people who might not fit into these narrow categories. When the resulting systems are used for things like gating entry for physical spaces or verifying someone’s identity for an online service, it leads to discrimination.
“Identifying someone’s gender by looking at them and not talking to them is sort of like asking what does the smell of blue taste like,” Keyes tells The Verge.
“The issue is not so much that your answer is wrong as your question doesn’t make any sense.” These predictions can be made using a variety of inputs, from analyzing someone’s voice to aggregating their shopping habits. But the rise of facial recognition has given companies and researchers a new data input they believe is particularly authoritative.
Commercial facial recognition systems, including those sold by big tech companies like Amazon and Microsoft, frequently offer gender classification as a standard feature. Predicting sexual orientation from the same data is much rarer, but researchers have still built such systems, most notably the so-called “AI gaydar” algorithm. There’s strong evidence that this technology doesn’t work even on its own flawed premises, but that wouldn’t necessarily limit its adoption.
“Even the people who first researched it said, yes, some tinpot dictator could use this software to try and ‘find the queers’ and then throw them in a camp,” says Keyes of the algorithm to detect sexual orientation. “And that isn’t hyperbole. In Chechnya, that’s exactly what they’ve been doing , and that’s without the aid of robots.” In the case of automatic gender recognition, these systems generally rely on narrow and outmoded understandings of gender. With facial recognition tech, if someone has short hair, they’re categorized as a man; if they’re wearing makeup, they’re a woman. Similar assumptions are made based on biometric data like bone structure and face shape. The result is that people who don’t fit easily into these two categories — like many trans and nonbinary individuals — are misgendered. “These systems don’t just fail to recognize that trans people exist. They literally can’t recognize that trans people exist,” says Keyes.
Current applications of this gender recognition tech include digital billboards that analyze passersby to serve them targeted advertisements; digital spaces like “girls-only” social app Giggle, which admits people by guessing their gender from selfies ; and marketing stunts, like a campaign to give discounted subway tickets to women in Berlin to celebrate Equal Pay Day that tried to identify women based on facial scans. Researchers have also discussed much more potentially dangerous use cases, like deploying the technology to limit entry to gendered areas like bathrooms and locker rooms.
Being rejected by a machine in such a scenario has the potential to be not only humiliating and inconvenient, but to also trigger an even more severe reaction. Anti-trans attitudes and hysteria over access to bathrooms have already led to numerous incidents of harassment and violence in public toilets, as passersby take it upon themselves to police these spaces. If someone is publicly declared by a seemingly impartial machine to be the “wrong” gender, it would only seem to legitimize such harassment and violence.
Daniel Leufer, a policy analyst at digital rights group Access Now, which is leading the campaign to ban these applications, says this technology is incompatible with the EU’s commitment to human rights.
“If you live in a society committed to upholding these rights, then the only solution is a ban,” Leufer tells The Verge.
“Automatic gender recognition is completely at odds to the idea of people being able to express their gender identity outside the male-female binary or in a different way to the sex they were assigned at birth.” Access Now, along with more than 60 other NGOs, has sent a letter to the European Commission, asking it to ban this technology. The campaign, which is supported by international LGBT+ advocacy group All Out, comes as the European Commission considers new regulations for AI across the EU. A draft white paper that circulated last year suggested a complete ban on facial recognition in public spaces was being considered, and Leufer says this illustrates how seriously the EU is taking the problem of AI regulation.
“There’s a unique moment right now with this legislation in the EU where we can call for major red lines, and we’re taking the opportunity to do that,” says Leufer. “The EU has consistently framed itself as taking a third path between China and the US [on AI regulation] with European values at its core, and we’re attempting to hold them to that.” Keyes points out that banning this technology should be of interest to everyone, “regardless of how they feel about the centrality of trans lives to their lives,” as these systems reinforce an extremely outdated mode of gender politics.
“When you look at what these researchers think, it’s like they’ve time-traveled from the 1950s,” says Keyes. “One system I saw used the example of advertising cars to males and pretty dresses to females. First of all, I want to know who’s getting stuck with the ugly dresses? And secondly, do they think women can’t drive?” The use of this technology can also be much more subtle than simply delivering different advertisements to men and women. Often, says Keyes, gender identification is used as a filter to produce outcomes that have nothing to do with gender itself.
For example, if a facial recognition algorithm is used to bar entry to a building or country by matching an individual to a database of faces, it might narrow down its search by filtering results by gender. Then, if the system misgenders the person in front of it, it will produce an invisible error that has nothing to do with the task at hand.
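To make that failure concrete, here is a hedged sketch of such a pipeline. Every detail in it (the record layout, the one-number stand-in for a face embedding, the matching threshold) is hypothetical, invented for illustration rather than drawn from any vendor's system; the point is only that a wrong gender prediction empties the candidate pool before face comparison ever runs:

```python
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    gender_label: str  # label assigned at enrollment (hypothetical)
    embedding: float   # stand-in for a real face-embedding vector

DATABASE = [Record("A. Jones", "F", 0.31), Record("B. Smith", "M", 0.87)]

def match(probe_embedding: float, predicted_gender: str) -> list[str]:
    # Step 1: narrow the search using the classifier's gender guess.
    candidates = [r for r in DATABASE if r.gender_label == predicted_gender]
    # Step 2: compare faces only within the filtered pool.
    return [r.name for r in candidates
            if abs(r.embedding - probe_embedding) < 0.05]

# The probe really is B. Smith, but the classifier misgenders them:
print(match(0.87, "F"))  # prints [] because the true match was silently filtered out
```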
Keyes says this sort of application is deeply worrying because companies don’t share details of how their technology works. “This may already be ubiquitous in existing facial recognition systems, and we just can’t tell because they are entirely black-boxed,” they say. In 2018, for example, trans Uber drivers were kicked off the company’s app because of a security feature that asked them to verify their identity with a selfie. Why these individuals were rejected by the system isn’t clear, says Keyes, but it’s possible that faulty gender recognition played a part.
Ultimately, technology that tries to reduce the world to binary classifications based on simple heuristics is always going to fail when faced with the variety and complexity of human expression. Keyes acknowledges that gender recognition by machine does work for a large number of people but says the underlying flaws in the system will inevitably hurt those who are already marginalized by society and force everyone into narrower forms of self-expression.
“We already live in a society which is very heavily gendered and very visually gendered,” says Keyes. “What these technologies are doing is making those decisions a lot more efficient, a lot more automatic, and a lot more difficult to challenge.”
" |
266 | 2,023 | "Your Data Helped Build ChatGPT. Where’s Your Payout? - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/03/open-ai-products-labor-profit/673527" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Your data helped build ChatGPT. Where’s your payout? Silicon Valley churns out new products all the time, but rarely does one receive the level of hype that has surrounded the release of GPT-4. The follow-up to ChatGPT can ace standardized tests, tell you why a meme is funny, and even help do your taxes. Since the San Francisco start-up OpenAI introduced the technology earlier this month, it has been branded as “remarkable but unsettling,” and has led to grandiose statements about how “things will never be the same.
” But actually trying out these features for yourself—or at least the ones that have already been publicly released—does not come cheap. Unlike ChatGPT, which captivated the world because it was free, GPT-4 is currently only available to non-developers through a premium service that costs $20 a month. OpenAI has lately made other moves to cash in on its products too. Last month, it announced a partnership with the consulting firm Bain & Company to help automate marketing campaigns and customer-service operations for its clients. And just a few weeks ago, the start-up announced a paid service that would allow other companies to integrate its technology into their own products, and Instacart, Snapchat, and Shopify have already done so.
By next year, OpenAI—a company that was basically unknown outside of tech just a few months ago— expects to rake in $1 billion in annual revenue. And it’s not the only company seeing dollar signs during this AI gold rush. Relatively new start-ups such as Anthropic now have billion-dollar valuations, while Alphabet and Meta have been breathlessly touting their AI investments. Every company wants an AI to call its own, just as they wanted social networks a decade ago or search engines in the decade before. And like those earlier technologies, AI tools can’t entirely be credited to corporate software engineers with six-figure salaries. Some of these products require invaluable labor from overseas workers who make far, far less, and every chatbot is created by ingesting books and content that have been published on the internet by a huge number of people. So in a sense, these tools were built by all of us.
The result is an uncomfortable disparity between who does the work that enables these AI models to function and who gets to control and profit from them. This sort of disparity is nothing new in Silicon Valley, but the development of AI is shifting power further away from those at the bottom at a time when layoffs have already resulted in a sense of wide-ranging precarity for the tech industry. Overseas workers won’t reap any of these profits, nor will the people who might have aspects of their work—or even their entire jobs—replaced by AI, even if their Reddit posts and Wikipedia entries were fed into these chatbots. Well-paid tech workers might eventually lose out too , considering AI’s coding abilities.
In the few months since OpenAI has blown up, it has reminded Silicon Valley of a fundamental truth that office perks and stock options should never have been able to disguise: Tech workers are just workers.
The tech industry as a whole may be unabashedly profit-driven despite its lofty rhetoric , but OpenAI wasn’t at first. When the start-up was founded in December 2015, it was deliberately structured as a nonprofit, tapping into a utopian idea of building technology in a way that was, well, open. The company’s mission statement expresses that its aim is “to benefit humanity as a whole,” noting that “since our research is free from financial obligations, we can better focus on a positive human impact.” The goal might have been worthy, considering all that could go wrong with true artificial intelligence, but it didn’t last. In 2019, citing the need to raise more money for its inventions, OpenAI reconfigured itself into a “capped-profit” company—an uneasy hybrid between for-profit and nonprofit in which any profits are capped at 100 times their initial investment. It has since acted like any other growth-hungry start-up, eager to raise its valuation at every turn. In January, Microsoft dropped $10 billion into OpenAI as part of a deal that gives Microsoft a license to use its technology (hello, Bing ), while also providing the start-up with the immense computing resources needed to power its products. That sum creates an inherent tension between OpenAI’s stated commitment and investors’ desire to make good on their investments. The company’s original rhetoric of creating “public goods” bears little resemblance to a Bain partnership oriented around “ hyperefficient content creation.
” (When reached for comment, a spokesperson for OpenAI did not answer my question about how the company’s latest moves fit within its broader mission.) This turn toward profit couldn’t possibly compensate for all the labor that contributed to OpenAI’s products. If the outputs of large language models such as GPT-4 feel intelligent and familiar to us, it’s because they are derived from the same content that we ourselves have used to make sense of the world, and perhaps even helped create. Genuine technical achievements went into the development of GPT-4, but the resulting technology would be functionally useless without the input of a data set that represents a slice of the combined insight, creativity, and well, stupidity of humanity. In that way, modern AI research resembles a digital “ enclosure of the commons ,” whereby the informational heritage of humanity—a collective treasure that cannot really be owned by anyone—is seen by corporations primarily as a source of potential profit. This is the Silicon Valley model in a nutshell: Google organizes the world’s information in a manner that allows it to reap enormous profits through showing us ads; Facebook does the same for our social interactions. It’s an arrangement that most of us just accept: In exchange for our data, we get free platforms.
But even if our internet posts are now data that can be turned into profit for AI companies, people who contributed more directly have been more directly exploited. Whereas some researchers at OpenAI have made nearly $2 million a year , OpenAI reportedly paid outsourced workers in Kenya less than $2 an hour to identify toxic elements in ChatGPT’s training data, exposing them to potentially traumatic content. The OpenAI spokesperson pointed me to an earlier statement to Time that said, “Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content.” Certainly, these global labor disparities are not unique to OpenAI; similar critiques of outsourcing practices have been leveled at other AI start-ups , in addition to companies, such as Meta , that rely on content moderation for user-generated data. Nor is this even a tech-specific phenomenon: Labor that is seen as simple is outsourced to subcontractors in the global South working under conditions that would not be tolerated by salaried employees in the West.
To recognize that these problems are larger than any one company isn’t to let OpenAI off the hook; rather it’s a sign that the industry and the economy as a whole are built on unequal distribution of rewards. The immense profits in the tech industry have always been funneled toward the top, instead of reflecting the full breadth of who does the work. But the recent developments in AI are particularly concerning given the potential applications for automating work in a way that would concentrate power in the hands of still fewer people. Even the same class of tech workers who are currently benefiting from the AI gold rush may stand to lose out in the future.
Already, GPT-4 can create a rudimentary website from a simple napkin sketch, at a moment when workers in the broader tech industry have been taking a beating. In the less than four months between the release of ChatGPT and GPT-4, mass layoffs were announced at large tech companies, including Amazon, Meta, Google, and Microsoft, which laid off 10,000 employees just days before announcing its multibillion-dollar investment in OpenAI. It’s a tense moment for tech workers as a class, and even well-paid employees are learning that they can become expendable for reasons that are outside their control.
If anything, the move to cash in on AI is yet another reminder of who’s actually in charge in this industry that has spawned so many products with enormous impact: certainly not the users, but not the workers either. OpenAI may still claim that it aims to “benefit humanity as a whole,” but surely its top brass will benefit the most.
" |
269 | 2,023 | "Chatbots Are Making the Internet Smaller - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/05/microsoft-bing-chatbot-search-information-consolidation/673958" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Bing Is a Trap. Tech companies say AI will expand the possibilities of searching the internet. So far, the opposite seems to be true.
A Microsoft spokesperson is typing something into a search engine, and it isn’t quite working. I’m watching this unfold at a Microsoft press event in Manhattan that’s meant to show off new features on Bing, the company’s Google rival. In this demonstration, a chatbot is supposed to respond to a user’s query with an embedded video. Typing on a large computer monitor in full view of several journalists, the staffer asks the program for instructions to tie a tie. But instead of a video, Bing generates an absurd heap of text—so many words about looping and knotting fabric set against a sterile white speech bubble. It reminds me of a Times New Roman resource page you would see on a professor’s old website.
Everyone in the crowd recognizes that this output is useless. (Even tie-a-tie.net, the oldest necktie-related resource I can find on the web, knew the score in 2003: Its how-to pages for styles such as the Windsor and the Pratt had illustrations.
) Another Microsoft rep makes a joke about how the glitch proves the point—it really would be useful for the AI to show a video in this particular context—and we move on. They try something else, and it works: The Bing bot gives a short answer to a question about skiing and then plops a YouTube video in the chat bubble.
It is functional, if not inspiring—a mismatch, almost, for the luxe setup at the Microsoft Experience Center in Midtown, where journalists gathered yesterday amid pricey-looking spring bouquets (the lilies smelled fantastic) and with unrestricted access to a complimentary smoothie bar. The tech corporation, which has taken an early lead in the generative-AI race, was excited to present what it calls “ the next wave of AI innovation.
” Its vision is to transform the way people gather information and learn things from the internet. In immediate terms, that means opening Bing’s chatbot up today to anyone with a Microsoft account; incorporating new types of media into search, like video embeds and visual charts; plug-ins that will allow a service like OpenTable to operate within the chat platform; and more. The juice is flowing.
Microsoft says this is the future of search. There’s been a lot of talk like that since last November, when OpenAI released ChatGPT and seemed to turn the world on its head: A new breed of artificial intelligence is suddenly more capable and, crucially, more accessible than many would have thought possible. (Microsoft has invested billions in OpenAI and is using the company’s technology in Bing.) Every new day brings with it a different angle through which to view the prism: Perhaps the chatbots will aid us at work , precipitate a crisis of online spam , make us more creative mixologists , and/or redefine the nature of nuclear war.
When it comes to search in particular, however, chatbots might just be dismally unimaginative. My takeaway from seeing Bing in action was not that AI-powered search was likely to expand the scope of human knowledge and lead us to new frontiers online. Instead, Microsoft has crushed AI into a piece of productivity software that makes the internet feel smaller.
The problem, in a nutshell, is consolidation—a new twist on an issue that has plagued the internet for the past decade and a half or so, as social-media giants, cloud providers, and, well, Google have leveraged market advantages and the absence of meaningful regulation to dominate our experiences of the web. Four years ago, the journalist Kashmir Hill found it nearly impossible to eliminate Microsoft, Amazon, Apple, Google, and Facebook services from her life. Think about how much of the time spent on your phone is filtered through the same few services every day. You might still use a handful of different websites and apps, but fewer than the seemingly limitless scope of the internet might suggest.
The Bing bot, alongside ChatGPT, Google’s Bard, and numerous competitors , augurs a more drastic streamlining. Imagine every crayon in the world melted into one dark glob and pinched through a funnel. Where once you went to a search engine to find another website to go to, you will now go to a search engine and stay on that search engine. For example, say I go to a typical, chatbotless search engine such as Ask.com and type an everyday query like “How do I clean mud off of leather shoes?” I’ll receive a list of links, from various outlets and perspectives, and I will click one of those links to hopefully find my answer. But now I can pull up the Bing chatbot and type that same thing; it will present a six-step answer inline, no outside navigation required. Bing cites links, but the entire product is engineered to give you an answer within its chat interface. That is, clearly, the selling point.
I floated the idea that Bing’s chatbot might make the internet feel smaller during a brief interview with Yusuf Mehdi, the corporate vice president and consumer chief marketing officer at Microsoft. He had called the product a “co-pilot,” something that could aid people who are cumulatively running 10 billion search queries a day across the internet. This articulation is telling: A co-pilot is essential. You wouldn’t want to take a flight without one. And on the internet, essentials become entrenched. Once, there was no Facebook, no Instagram, no Google or iCloud; now, for many, it is hard to imagine life, let alone the internet, without them. Digital technology is often positioned by companies in terms of expanding possibilities, but the ultimate effect is constraining them. When asked if the new Bing was designed to keep you on Bing, rather than wandering elsewhere, Mehdi said he viewed the issue as a “potential risk” but fundamentally believes that Bing’s chatbot will be a kind of liberatory force, freeing people from the time-consuming process of traditional search as it stands. “What else can I learn about the world? What else can I go see? We’re just trying to take out a lot of the menial labor of what people are doing and speed them to get to what they want,” he said.
It seemed he also felt there was an elephant in the room: Namely, that I work as a journalist in an especially brutal time for online media.
Bing and other chatbot-powered search engines could be a threat to publications if the search platforms discourage people from clicking over to the original stories from which information is drawn. “For us, it’s absolutely a goal that we drive more traffic to content publishers, no questions asked,” Mehdi told me. “Like, it’s in our metrics internally.” He explained the logic: Publishers need clicks to sell ads against, and the chatbot needs content from publishers to offer anything to users. That’s kind of true, kind of not: By its very nature, the AI has been trained on so much existing material online that most searches outside of breaking-news events are already well covered. I probably wouldn’t pour money into a new website about how to tie a tie.
I nudged again.
Why is this goal important to you? He answered: “For us, it’s important that the traffic is there, that it’s working, that publishers say, ‘Yeah, we like Bing; we like Bing chat. It’s getting us traffic; it’s getting us volume.’” Then came another tell. Ideally, publishers would be so impressed by Bing’s chatbot that they would want to integrate their services with it. “As we talked about today, we want to have plug-ins,” he said. “We’d like to have people build plug-ins on that.” And that made more sense. You could wait for the traffic to come to your site. Or you could just build something that fits in the machine’s little white text box. It is, after all, the future.
" |
270 | 2,023 | "Chatbots Are Pleasant, Measured, and Patently Unreliable - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/04/ai-chatbots-llm-text-generator-information-credibility/673841" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Chatbots Sound Like They’re Posting on LinkedIn. Large language models make things up, but the worse problem may be in how they present those falsehoods.
If you spend any time on the internet, you’re likely now familiar with the gray-and-teal screenshots of AI-generated text. At first they were meant to illustrate ChatGPT’s surprising competence at generating human-sounding prose, and then to demonstrate the occasionally unsettling answers that emerged once the general public could bombard it with prompts. OpenAI, the organization that is developing the tool, describes one of its biggest problems this way: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” In layman’s terms, the chatbot makes stuff up. As similar services, such as Google’s Bard, have rushed their tools into public testing, their screenshots have demonstrated the same capacity for fabricating people, historical events, research citations, and more, and for rendering those falsehoods in the same confident, tidy prose.
This apparently systemic penchant for inaccuracy is especially worrisome, given tech companies’ intent to integrate these tools into search engines as soon as possible. But a bigger problem might lie in a different aspect of AI’s outputs—more specifically, in the polite, businesslike, serenely insipid way that the chatbots formulate their responses. This is the prose style of office work and email jobs, of by-the-book corporate publicists and LinkedIn influencers with private-school MBAs. The style sounds the same—pleasant, measured, authoritative—no matter whether the source (be it human or computer) is trying to be helpful or lying through their teeth or not saying anything coherent at all.
In the United States, this is the writing style of institutional authority, and AI chatbots are so far exquisitely capable of replicating its voice, while delivering information that is patently unreliable. On a practical level, this will pose challenges for people who must navigate a world with this kind of technology suddenly thrust into it. Our mental shortcuts used for evaluating communicative credibility on the fly have always been less than perfect, and the very nature of the internet already makes such judgment calls more difficult and necessary. AI could make them nearly impossible.
ChatGPT and its ilk are built using what are known as large language models, or LLMs. That means they hoover up very large quantities of written language online and then, very crudely speaking, analyze that data set to determine which words would likely be assembled in which order to create a successful response. They generate text that’s been optimized for plausibility, not for truthfulness. Being right isn’t the goal, at least not now; sounding right is. For any particular query, there are many more answers that sound right than answers that are true. LLMs aren’t intentionally lying—they are not alive, and cannot produce results meaningfully similar to human thought. And they haven’t been created to mislead their users. The chatbots do, after all, frequently generate answers that are both plausible and correct, even though any veracity is incidental. They are, in other words, masters of bullshit—persuasive speech whose essence “is just this lack of connection to a concern with truth—this indifference to how things really are,” the philosopher Harry Frankfurt wrote in his book-length essay on this sort of rhetoric.
What LLMs are currently capable of producing is industrially scaled, industrial-grade bullshit. That’s troublesome for many reasons, not least of which is that humans have enough trouble discerning the age-old artisanal variety. Every human is required to make a zillion tiny decisions every day about whether some notion they’re presented with should be believed, and rarely do they have the opportunity or desire to stop, gather all the relevant information, and reason those decisions from first principles. To do so would pretty much halt human interaction as we know it, and even trying would make you pretty annoying.
So people instead rely on cognitive heuristics, which are little shortcuts that, in this case, help tip us toward belief or disbelief in situations where the full facts are unknown or unknowable. When you take medical advice from your doctor, you’ve employed an authority heuristic , which assigns trust in sources you believe have specialized knowledge and expertise. When you decide that something is probably true because it’s become the consensus among your family and friends, that’s the bandwagon heuristic at work. Even the best heuristics aren’t perfect: Your doctor might disbelieve your reported symptoms and misdiagnose you, or your social circle might be riddled with people who think the Earth is flat. But according to Miriam Metzger, a professor at UC Santa Barbara who studies how people evaluate credibility online, many of these shortcuts are, on balance, largely sound and extremely useful. Most people in most situations, for example, would be well served to listen to their doctor instead of taking medical advice from their weird cousin.
The growth of the internet has posed all kinds of issues for the accurate use of credibility heuristics, Metzger told me. There are too many potential sources of information vying directly for your attention, and too few ways to evaluate those sources or their motives quickly. Now your weird cousin is posting things on Facebook—and so are all of his weird friends, and their friends too. “The digital environment gives us a vastness of information in which it’s just harder for consumers to know who and what to trust,” Metzger said. “It’s put more of the burden on individuals to make their own credibility assessment practically every time they are confronted with new information.” In the United States, this informational fragmentation is usually seen through the lens of politics, but it has also seeped into more mundane parts of life. On the internet, everyone can theoretically access expertise on everything. This freedom has some huge upsides, especially for people trying to solve small, manageable problems: There are enough instructional YouTube videos and Reddit threads to make you into your own travel agent, mechanic, plumber, and physical therapist. In many other scenarios, though, making judgment calls based on the internet’s conglomeration of questionably sourced knowledge and maybe-faux expertise can have real consequences. We often don’t have anywhere near the information we’d need to evaluate a source’s credibility, and when that happens, we generally start rummaging through our bag of heuristics until we find one that works with whatever context we do have. What we end up with might just be the fluency heuristic —which is to say, the sense that certain patterns of communication are inherently credible.
In mainstream American culture, good grammar, accurate spelling, and a large and varied vocabulary free of expletives, slurs, or slang are all prerequisites for credibility, and a lack of them can be used to discredit challengers to existing authority and malign people with less education or different cultural backgrounds. This heuristic also can be easily used against the people who employ it: The more the phishing email looks and sounds like real communication from your bank, the more accounts scammers get to drain.
This is where the tidy, professional corporate-speak of well-trained LLMs has serious potential to cause informational chaos, Metzger said. Among other sources, the best AIs are trained on editorial content from major media organizations, archives of academic research, and troves of government and legal documents, according to a recent report by The Washington Post.
These are just the type of source that would employ a precise and highly educated communication style. ChatGPT and other chatbots like it are text-generation machines that make up facts and sever information from its source. They are also authority-simulation machines that discourage readers from ever doubting them in the first place.
" |
271 | 2,023 | "Bing and Google's chatbots are a disaster - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/02/google-microsoft-search-engine-chatbots-unreliability/673081" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
AI Search Is a Disaster. Microsoft and Google believe chatbots will change search forever. So far, there’s no reason to believe the hype.
Last week, both Microsoft and Google announced that they would incorporate AI programs similar to ChatGPT into their search engines—bids to transform how we find information online into a conversation with an omniscient chatbot. One problem: These language models are notorious mythomaniacs.
In a promotional video, Google’s Bard chatbot made a glaring error about astronomy—misstating by well over a decade when the first photo of a planet outside our solar system was captured—that caused its parent company’s stock to slide as much as 9 percent.
The live demo of the new Bing, which incorporates a more advanced version of ChatGPT, was riddled with embarrassing inaccuracies too. Even as the past few months would have many believe that artificial intelligence is finally living up to its name, fundamental limits to this technology suggest that this month’s announcements might actually lie somewhere between the Google Glass meltdown and an iPhone update—at worst science-fictional hype, at best an incremental improvement accompanied by a maelstrom of bugs.
The trouble arises when we treat chatbots not just as search bots, but as having something like a brain—when companies and users trust programs like ChatGPT to analyze their finances, plan travel and meals, or provide even basic information. Instead of forcing users to read other internet pages, Microsoft and Google have proposed a future where search engines use AI to synthesize information and package it into basic prose, like silicon oracles. But fully realizing that vision might be a distant goal, and the road to it is winding and clouded: The programs currently driving this change, known as “large language models,” are decent at generating simple sentences but pretty awful at everything else.
Read: The difference between speaking and thinking These models work by identifying and regurgitating patterns in language, like a super-powerful autocorrect. Software like ChatGPT first analyzes huge amounts of text—books, Wikipedia pages, newspapers, social-media posts—and then uses those data to predict what words and phrases are most likely to go together. These programs model existing language, which means they can’t come up with “new” ideas. And their reliance on statistical regularities means they have a tendency to produce cheapened, degraded versions of the original information—something like a flawed Xerox copy, in the writer Ted Chiang’s imagining.
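To make the "super-powerful autocorrect" idea concrete, here is a toy sketch in Python of the underlying statistical principle: count which word tends to follow which, and predict the most frequent continuation. This is only an illustration of the idea; real models like ChatGPT use neural networks over subword tokens, not a lookup table.

    from collections import Counter, defaultdict

    # A tiny corpus standing in for the web-scale text a real model trains on.
    corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

    # Count which word follows each word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the statistically most likely continuation of `word`.
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # prints 'cat', the most frequent follower above

Scale that table up to billions of parameters and documents and you get fluent prose; what you do not get, as the paragraph above notes, is any model of truth.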
And even if ChatGPT and its cousins had learned to predict words perfectly, they would still lack other basic skills. For instance, they don’t understand the physical world or how to use logic, are terrible at math, and, most germane to searching the internet, can’t fact-check themselves.
Just yesterday, ChatGPT told me there are six letters in its name.
These language programs do write some “new” things—they’re called “hallucinations,” but they could also be described as lies.
Similar to how autocorrect is ducking terrible at getting single letters right, these models mess up entire sentences and paragraphs. The new Bing reportedly said that 2022 comes after 2023, and then stated that the current year is 2022, all while gaslighting users when they argued with it; ChatGPT is known for conjuring statistics from fabricated sources. Bing made up personality traits about the political scientist Rumman Chowdhury and engaged in plenty of creepy, gendered speculation about her personal life. The journalist Mark Hachman, trying to show his son how the new Bing has antibias filters, instead induced the AI to teach his youngest child a vile host of ethnic slurs (Microsoft said it took “immediate action … to address this issue”).
Asked about these problems, a Microsoft spokesperson wrote in an email that, “given this is an early preview, [the new Bing] can sometimes show unexpected or inaccurate answers,” and that “we are adjusting its responses to create coherent, relevant and positive answers.” And a Google spokesperson told me over email, “Testing and feedback, from Googlers and external trusted testers, are important aspects of improving Bard to ensure it’s ready for our users.” In other words, the creators know that the new Bing and Bard are not ready for the world, despite the product announcements and ensuing hype cycle. The chatbot-style search tools do offer footnotes, a vague gesture toward accountability—but if AI’s main buffer against misinformation is a centuries-old citational practice, then this “revolution” is not meaningfully different from a Wikipedia entry.
Read: Is this the week AI changed everything? If the glitches—and outright hostility—aren’t enough to give you pause, consider that training an AI takes tremendous amounts of data and time. ChatGPT, for instance, hasn’t trained on (and thus has no knowledge of) anything after 2021, and updating any model with every minute’s news would be impractical, if not impossible. To provide more recent information—about breaking news, say, or upcoming sporting events—the new Bing reportedly runs a user’s query through the traditional Bing search engine and uses those results, in conjunction with the AI, to write an answer. It sounds something like a Russian doll, or maybe a gilded statue: Beneath the outer, glittering layer of AI is the same tarnished Bing we all know and never use.
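The reported arrangement, a conventional search pass feeding results into the language model, is a pattern often called retrieval-augmented generation. A minimal sketch of the idea follows; `web_search` and `llm` are hypothetical stand-ins, since neither Microsoft nor Google has published its actual pipeline.

    def answer_with_search(query, web_search, llm):
        # Step 1: run the ordinary search engine (the "inner Bing").
        results = web_search(query, top_k=3)
        context = "\n".join(r.snippet for r in results)
        # Step 2: have the language model write prose grounded in those results.
        prompt = (
            "Answer the question using only these search results:\n"
            + context
            + "\n\nQuestion: " + query
        )
        return llm(prompt)

The footnotes these chatbots display are essentially pointers back to whatever `results` happened to surface, which is why the answer can only be as good as the underlying search.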
The caveat to all of this skepticism is that Microsoft and Google haven’t said very much about how these AI-powered search tools really work. Perhaps they are incorporating some other software to improve the chatbots’ reliability, or perhaps the next iteration of OpenAI’s language model, GPT-4, will magically resolve these concerns, if (incredible) rumors prove true. But current evidence suggests otherwise, and in reference to the notion that GPT-4 might approach something like human intelligence, OpenAI’s CEO has said , “People are begging to be disappointed and they will be.” Indeed, two of the biggest companies in the world are basically asking the public to have faith—to trust them as if they were gods and chatbots their medium, like Apollo speaking through a priestess at Delphi. These AI search bots will soon be available for anyone to use, but we shouldn’t be so quick to trust glorified autocorrects to run our lives. Less than a decade ago, the world realized that Facebook was less a fun social network and more a democracy-eroding machine.
If we’re still rushing to trust the tech giants’ Next Big Thing, then perhaps hallucination, with or without chatbots, has already supplanted searching for information and thinking about it.
" |
272 | 2,023 | "ChatGPT is about to dump more work on everyone - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/02/chatgpt-ai-detector-machine-learning-technology-bureaucracy/672927" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce ChatGPT Is About to Dump More Work on Everyone Artificial intelligence could spare you some effort. Even if it does, it will create a lot more work in the process.
Have you been worried that ChatGPT, the AI language generator, could be used maliciously—to cheat on schoolwork or broadcast disinformation? You’re in luck, sort of: OpenAI, the company that made ChatGPT, has introduced a new tool that tries to determine the likelihood that a chunk of text you provide was AI-generated.
I say “sort of” because the new software faces the same limitations as ChatGPT itself: It might spread disinformation about the potential for disinformation. As OpenAI explains, the tool will likely yield a lot of false positives and negatives, sometimes with great confidence. In one example , given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated. God, the first AI.
On the one hand, OpenAI appears to be adopting a classic mode of technological solutionism: creating a problem, and then selling the solution to the problem it created. But on the other hand, it might not even matter if either ChatGPT or its antidote actually “works,” whatever that means (in addition to its limited accuracy, the program is effective only on English text and needs at least 1,000 characters to work with). The machine-learning technology and others like it are creating a new burden for everyone. Now, in addition to everything else we have to do, we also have to make time for the labor of distinguishing between human and AI, and the bureaucracy that will be built around it.
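OpenAI published the detector as a hosted tool rather than a documented API, so the sketch below is hypothetical; it only wraps the constraints the company itself described, a 1,000-character minimum and a probabilistic score that can misfire in both directions.

    MIN_CHARS = 1000  # OpenAI said the classifier needs at least 1,000 characters

    def label_text(text, classifier):
        # `classifier` is a hypothetical stand-in returning P(AI-generated).
        if len(text) < MIN_CHARS:
            return "too short to classify"
        p = classifier(text)
        # Any threshold trades false positives against false negatives,
        # which is how the Book of Genesis can come back flagged as likely AI.
        if p > 0.9:
            return "likely AI-generated (possibly wrongly)"
        if p < 0.1:
            return "likely human-written (possibly wrongly)"
        return "unclear"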
If you are a student, parent, educator, or individual with internet access, you may have caught wind of the absolute panic that has erupted around ChatGPT. There are fears—It’s the end of education as we know it! It passed a Wharton MBA exam!—and retorts to those fears: We must defend against rampant cheating.
If your class can be gamed by an AI, then it was badly designed in the first place! An assumption underlies all these harangues, that education needs to “respond” to ChatGPT, to make room for and address it. At the start of this semester at Washington University in St. Louis, where I teach, our provost sent all faculty an email encouraging us to be aware of the technology and consider how to react to it. Like many institutions, ours also hosted a roundtable to discuss ChatGPT. In a matter of months, generative AI has sent secondary and postsecondary institutions scrambling to find a response—any response—to its threats or opportunities.
Read: ChatGPT is dumber than you think That work heaps atop an already overflowing pile of duties. With budgets cut, schoolteachers often crowdsource funds and materials for their classrooms. The coronavirus pandemic changed assumptions about attendance and engagement, making everyone renegotiate, sometimes weekly, where and when class will take place. Managing student anxiety and troubleshooting broken classroom technology is now a part of most teachers’ everyday work. That’s not to mention all the emails, and the training modules, and the self-service accounting tasks. And now comes ChatGPT, and ChatGPT’s flawed remedy.
The situation extends well beyond education. Almost a decade ago, I diagnosed a condition I named hyperemployment.
Thanks to computer technology, most professionals now work a lot more than they once did. In part, that’s because email and groupware and laptops and smartphones have made taking work home much easier—you can work around the clock if nobody stops you. But also, technology has allowed, and even required, workers to take on tasks that might otherwise have been carried out by specialists as their full-time job. Software from SAP, Oracle, and Workday forces workers to do their own procurement and accounting. Data dashboards and services make office workers part-time business analysts. On social media, many people are now de facto marketers and PR agents for their division and themselves.
No matter what ChatGPT and other AI tools ultimately do, they will impose new regimes of labor and management atop the labor required to carry out the supposedly labor-saving effort. ChatGPT’s AI detector introduces yet another thing to do and to deal with.
Is a student trying to cheat with AI? Better run the work through the AI-cheater check. Even educators who don’t want to use such a thing will be ensnared in its use: subject to debates about the ethics of sharing student work with OpenAI to train the model; forced to adopt procedures to address the matter as institutional practice, and to reconfigure lesson plans to address the “new normal”; obligated to read emails about those procedures to consider implementing them.
At other jobs, different but similar situations will arise. Maybe you outsourced some work to a contractor. Now you need to make sure it wasn’t AI-generated, in order to prevent fiscal waste, legal exposure, or online embarrassment. As cases like this appear, prepare for an all-hands meeting, and a series of email follow-ups, and maybe eventually a compulsory webinar and an assessment of your compliance with the new learning-management system, and on and on.
New technologies meant to free people from the burden of work have added new types of work to do instead. Home appliances such as the washing machine freed women to work outside the home, which in turn reduced time to do housework (which still fell largely to women) even as the standards for home perfection rose. Photocopiers and printers reduce the burden of the typist but create the need to self-prepare, collate, and distribute the reports in addition to writing them. The automated grocery checkout assigns the job of cashier to the shopper. Email makes it possible to communicate rapidly and directly with collaborators, but then your whole day is spent processing emails, which renews the burden again the next day. Zoom makes it possible to meet anywhere, but in doing so begets even more meetings.
ChatGPT has held the world’s attention, a harbinger of—well, something, but maybe something big, and weird, and new. That response has inspired delight, anxiety, fear, and dread, but no matter the emotion, it has focused on the potential uses of the technology, whether for good or ill.
The ChatGPT detector offers the first whiff of another, equally important consequence of the AI future: its inevitable bureaucratization. Microsoft, which has invested billions of dollars in OpenAI, has declared its hope to integrate the technology into Office. That could help automate work, but it’s just as likely to create new demands for Office-suite integration, just as previous add-ons such as SharePoint and Teams did. Soon, maybe, human resources will require the completion of AI-differentiation reports before approving job postings. Procurement may adopt a new Workday plug-in to ensure vendor-work-product approvals are following AI best practices, a requirement you will now have to perform in addition to filling out your expense reports—not to mention your actual job. Your Salesforce dashboard may offer your organization the option to add a required AI-probability assessment before a lead is qualified. Your kids’ school may send a “helpful” guide to policing your children’s work at home for authenticity, because “if AI deception is a problem, all of us have to be part of the solution.” Maybe AI will help you work. But more likely, you’ll be working for AI.
" |
273 | 2,023 | "The People Building AI Don’t Know What It Will Do Next - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/03/open-ai-gpt4-chatbot-technology-power/673421" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce What Have Humans Just Unleashed? Call it tech’s optical-illusion era: Not even the experts know exactly what will come next in the AI revolution.
GPT-4 is here, and you’ve probably heard a good bit about it already. It’s a smarter, faster, more powerful engine for AI programs such as ChatGPT. It can turn a hand-sketched design into a functional website and help with your taxes. It got a 5 on the AP Art History test. There were already fears about AI coming for white-collar work, disrupting education, and so much else, and there was some healthy skepticism about those fears. So where does a more powerful AI leave us? Perhaps overwhelmed or even tired, depending on your leanings. I feel both at once. It’s hard to argue that new large language models, or LLMs, aren’t a genuine engineering feat, and it’s exciting to experience advancements that feel magical, even if they’re just computational. But nonstop hype around a technology that is still nascent risks grinding people down because being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI’s newest model inevitably sidesteps crucial questions—ones that simply don’t fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we’re still grappling with their still quite novel, but certainly less powerful, predecessors, including ChatGPT? Over the past few weeks, I’ve put questions like these to AI researchers, academics, entrepreneurs, and people who are currently building AI applications. I’ve become obsessive about trying to wrap my head around this moment, because I’ve rarely felt less oriented toward a piece of technology than I do toward generative AI. When reading headlines and academic papers or simply stumbling into discussions between researchers or boosters on Twitter, even the near future of an AI-infused world feels like a mirage or an optical illusion. Conversations about AI quickly veer into unfocused territory and become kaleidoscopic, broad, and vague. How could they not? The more people I talked with, the more it became clear that there aren’t great answers to the big questions. Perhaps the best phrase I’ve heard to capture this feeling comes from Nathan Labenz, an entrepreneur who builds AI video technology at his company, Waymark: “Pretty radical uncertainty.” He already uses tools like ChatGPT to automate small administrative tasks such as annotating video clips. To do this, he’ll break videos down into still frames and use different AI models that do things such as text recognition, aesthetic evaluation, and captioning—processes that are slow and cumbersome when done manually. With this in mind, Labenz anticipates “a future of abundant expertise,” imagining, say, AI-assisted doctors who can use the technology to evaluate photos or lists of symptoms to make diagnoses (even as error and bias continue to plague current AI health-care tools). But the bigger questions—the existential ones—cast a shadow. “I don’t think we’re ready for what we’re creating,” he told me. AI, deployed at scale, reminds him of an invasive species: “They start somewhere and, over enough time, they colonize parts of the world … They do it and do it fast and it has all these cascading impacts on different ecosystems.
Some organisms are displaced, sometimes landscapes change, all because something moved in.” Read: Welcome to the big blur The uncertainty is echoed by others I spoke with, including an employee at a major technology company that is actively engineering large language models. They don’t seem to know exactly what they’re building, even as they rush to build it. (I’m withholding the names of this employee and the company because the employee is prohibited from talking about the company’s products.) “The doomer fear among people who work on this stuff,” the employee said, “is that we still don’t know a lot about how large language models work.” For some technologists, the black-box notion represents boundless potential and the ability for machines to make humanlike inferences, though skeptics suggest that uncertainty makes addressing AI safety and alignment problems exponentially difficult as the technology matures.
There’s always been tension in the field of AI—in some ways, our confused moment is really nothing new. Computer scientists have long held that we can build truly intelligent machines, and that such a future is around the corner. In the 1960s, the Nobel laureate Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work that a man can do.” Such overconfidence has given cynics reason to write off AI pontificators as the computer scientists who cried sentience ! Melanie Mitchell, a professor at the Santa Fe Institute who has been researching the field of artificial intelligence for decades, told me that this question—whether AI could ever approach something like human understanding—is a central disagreement among people who study this stuff. “Some extremely prominent people who are researchers are saying these machines maybe have the beginnings of consciousness and understanding of language, while the other extreme is that this is a bunch of blurry JPEGs and these models are merely stochastic parrots,” she said, referencing a term coined by the linguist and AI critic Emily M. Bender to describe how LLMs stitch together words based on probabilities and without any understanding. Most important, a stochastic parrot does not understand meaning. “It’s so hard to contextualize, because this is a phenomenon where the experts themselves can’t agree,” Mitchell said.
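The "stochastic" in Bender's phrase is literal: a model's output is sampled from a probability distribution over possible next words. A two-step illustration using only Python's standard library (the numbers here are invented for the example):

    import random

    # Invented model output: probabilities for the word after "The sky is".
    next_word_probs = {"blue": 0.72, "clear": 0.15, "falling": 0.08, "green": 0.05}

    words, weights = zip(*next_word_probs.items())
    # Sampling, rather than always taking the top word, is what makes the
    # parrot stochastic: run this repeatedly and the continuation varies.
    print(random.choices(words, weights=weights, k=1)[0])

Nothing in that draw involves meaning, which is the stochastic-parrot camp's point; the disagreement is over whether anything meaning-like nonetheless emerges at scale.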
One of her recent papers illustrates that disagreement. She cites a survey from last year that asked 480 natural-language researchers if they believed that “some generative model trained only on text, given enough data and computational resources, could understand natural language in some non-trivial sense.” Fifty-one percent of respondents agreed and 49 percent disagreed. This division makes evaluating large language models tricky. GPT-4’s marketing centers on its ability to perform exceptionally on a suite of standardized tests, but, as Mitchell has written, “when applying tests designed for humans to LLMs, interpreting the results can rely on assumptions about human cognition that may not be true at all for these models.” It’s possible, she argues, that the performance benchmarks for these LLMs are not adequate and that new ones are needed.
There are plenty of reasons for all of these splits, but one that sticks with me is that understanding why a large language model like the one powering ChatGPT arrived at a particular inference is difficult, if not impossible. Engineers know what data sets an AI is trained on and can fine-tune the model by adjusting how different factors are weighted. Safety consultants can create parameters and guardrails for systems to make sure that, say, the model doesn’t help somebody plan an effective school shooting or give a recipe to build a chemical weapon. But, according to experts, to actually parse why a program generated a specific result is a bit like trying to understand the intricacies of human cognition: Where does a given thought in your head come from? The fundamental lack of common understanding has not stopped the tech giants from plowing ahead without providing valuable, necessary transparency around their tools.
(See, for example, how Microsoft’s rush to beat Google to the search-chatbot market led to existential , even hostile interactions between people and the program as the Bing chatbot appeared to go rogue.) As they mature, models such as OpenAI’s GPT-4, Meta’s LLaMA , and Google’s LaMDA will be licensed by countless companies and infused into their products. ChatGPT’s API has already been licensed out to third parties. Labenz described the future as generative AI models “sitting at millions of different nodes and products that help to get things done.” AI hype and boosterism make talking about what the near future might look like difficult. The “AI revolution” could ultimately take the form of prosaic integrations at the enterprise level. The recent announcement of a partnership between the Bain & Company consultant group and OpenAI offers a preview of this type of lucrative, if soulless, collaboration, which promises to “offer tangible benefits across industries and business functions—hyperefficient content creation, highly personalized marketing, more streamlined customer service operations.” These collaborations will bring ChatGPT-style generative tools into tens of thousands of companies’ workflows. Millions of people who have no interest in seeking out a chatbot in a web browser will encounter these applications through productivity software that they use every day, such as Slack and Microsoft Office. This week, Google announced that it would incorporate generative-AI tools into all of its Workspace products, including Gmail, Docs, and Sheets, to do things such as summarizing a long email thread or writing a three-paragraph email based on a one-sentence prompt. (Microsoft announced a similar product too.) Such integrations might turn out to be purely ornamental, or they could reshuffle thousands of mid-level knowledge-worker jobs. It’s possible that these tools don’t kill all of our jobs, but instead turn people into middle managers of AI tools.
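For a sense of how lightweight those third-party integrations can be, a minimal call to the ChatGPT API, as OpenAI's Python package exposed it in early 2023, looked roughly like this (the prompt and placeholder key are illustrative):

    import openai

    openai.api_key = "sk-..."  # a real key comes from the OpenAI dashboard

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the ChatGPT model exposed to developers in 2023
        messages=[
            {"role": "system", "content": "Summarize email threads in one paragraph."},
            {"role": "user", "content": "...pasted thread text..."},
        ],
    )
    print(response["choices"][0]["message"]["content"])

A few lines like these, wrapped in a product's own interface, are what being "infused into their products" typically means in practice.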
The next few months might go like this: You will hear stories of call-center employees in rural areas whose jobs have been replaced by chatbots. Law-review journals might debate GPT-4 co-authorship in legal briefs. There will be regulatory fights and lawsuits over copyright and intellectual property. Conversations about the ethics of AI adoption will grow in volume as new products make little corners of our lives better but also subtly worse. Say, for example, your smart fridge gets an AI-powered chatbot that can tell you when your raw chicken has gone bad, but it also gives false positives from time to time and leads to food waste: Is that a net positive or net negative for society? There might be great art or music created with generative AI, and there will definitely be deepfakes and other horrible abuses of these tools. Beyond this kind of basic pontification, no one can know for sure what the future holds. Remember: radical uncertainty.
Read: We haven’t seen the worst of fake news Even so, companies like OpenAI will continue to build out bigger models that can handle more parameters and operate more efficiently. The world hadn’t even come to grips with ChatGPT before GPT-4 rolled out this week. “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever,” OpenAI’s CEO, Sam Altman, wrote in a blog post last month, referring to artificial general intelligence, or machines that are on par with human thinking. “Instead, society and the developers of AGI have to figure out how to get it right.” Like most philosophical conversations about AGI, Altman’s post oscillates between the vague benefits of such a radical tool (“providing a great force multiplier for human ingenuity and creativity”) and the ominous-but-also-vague risks (“misuse, drastic accidents, and societal disruption” that could be “existential”) it might entail.
Meanwhile, the computational power demanded by this technology will continue to increase, with the potential to become staggering. AI likely could eventually demand supercomputers that cost an astronomical amount of money to build (by some estimates, Bing’s AI chatbot could “need at least $4 billion of infrastructure to serve responses to all users”), and it’s unclear how that would be financed, or what strings might ultimately get attached to related fundraising. No one—Altman included—could ever fully answer why they should be the ones trusted with and responsible for bringing what he argues is potentially civilization-ending technology into the world.
Of course, as Mitchell notes, the basics of OpenAI’s dreamed-of AGI—how we can even define or recognize a machine’s intelligence—are unsettled debates. Once again, the wider our aperture, the more this technology behaves and feels like an optical illusion, even a mirage. Pinning it down is impossible. The further we zoom out, the harder it is to see what we’re building and whether it’s worthwhile.
Recently, I had one of these debates with Eric Schmidt, the former Google CEO who wrote a book with Henry Kissinger about AI and the future of humanity. Near the end of our conversation, Schmidt brought up an elaborate dystopian example of AI tools taking hateful messages from racists and, essentially, optimizing them for wider distribution. In this situation, the company behind the AI is effectively doubling the capacity for evil by serving the goals of the bigot, even if it intends to do no harm. “I picked the dystopian example to make the point,” Schmidt told me—that it’s important for the right people to spend the time and energy and money to shape these tools early. “The reason we’re marching toward this technological revolution is it is a material improvement in human intelligence. You’re having something that you can communicate with; they can give you advice that’s reasonably accurate. It’s pretty powerful. It will lead to all sorts of problems.” I asked Schmidt if he genuinely thought such a trade-off was worth it. “My answer,” he said, “is hell yeah.” But I found his rationale unconvincing. “If you think about the biggest problems in the world, they are all really hard—climate change, human organizations, and so forth. And so, I always want people to be smarter. The reason I picked a dystopian example is because we didn’t understand such things when we built up social media 15 years ago. We didn’t know what would happen with election interference and crazy people. We didn’t understand it and I don’t want us to make the same mistakes again.” Having spent the past decade reporting on the platforms, architecture, and societal repercussions of social media, I can’t help but feel that the systems, though human and deeply complex, are of a different technological magnitude than the scale and complexity of large language models and generative-AI tools. The problems—which their founders didn’t anticipate—weren’t wild, unimaginable, novel problems of humanity. They were reasonably predictable problems of connecting the world and democratizing speech at scale for profit at lightning speed. They were the product of a small handful of people obsessed with what was technologically possible and with dreams of rewiring society.
Trying to find the perfect analogy to contextualize what a true, lasting AI revolution might look like without falling victim to the most overzealous marketers or doomers is futile. In my conversations, the comparisons ranged from the agricultural revolution to the industrial revolution to the advent of the internet or social media. But one comparison never came up, and I can’t stop thinking about it: nuclear fission and the development of nuclear weapons.
As dramatic as this sounds, I don’t lie awake thinking of Skynet murdering me—I don’t even feel like I understand what advancements would need to happen with the technology for killer AGI to become a genuine concern. Nor do I think large language models are going to kill us all. The nuclear comparison isn’t about any version of the technology we have now—it is related to the bluster and hand-wringing from true believers and organizations about what technologists might be building toward. I lack the technical understanding to know what later iterations of this technology could be capable of, and I don’t wish to buy into hype or sell somebody’s lucrative, speculative vision. I am also stuck on the notion, voiced by some of these visionaries, that AI’s future development might potentially be an extinction-level threat.
ChatGPT doesn’t really resemble the Manhattan Project, obviously. But I wonder if the existential feeling that seeps into most of my AI conversations parallels the feelings inside Los Alamos in the 1940s. I’m sure there were questions then.
If we don’t build it, won’t someone else? Will this make us safer? Should we take on monumental risk simply because we can? Like everything about our AI moment, what I find calming is also what I find disquieting. At least those people knew what they were building.
" |
274 | 2,023 | "Nine AI Chatbots You Can Play With Right Now - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/03/chatgpt-generative-ai-chatbots-bing-google-bard/673533" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce Nine AI Chatbots You Can Play With Right Now The machines may change the world as you know it. But first, they’ll write a sonnet based on your favorite cereal.
If you believe in the multibillion-dollar valuations, the prognostications from some of tech’s most notable figures, and the simple magic of getting a computer to do your job for you, then you might say we’re at the start of the chatbot era. Last November, OpenAI released ChatGPT into the unsuspecting world: It became the fastest-growing consumer app in history and immediately seemed to reconfigure how people think of conversational programs. Chatbots have existed for decades, but they haven’t seemed especially intelligent—nothing like the poetry-writing, email-summarizing machines that have sprouted up recently.
Yes, machines—plural. OpenAI has defined the moment, but there are plenty of competitors, including major players such as Google and Meta and lesser-known start-ups such as Anthropic. This cheat sheet tracks some of the most notable chatbot contenders through a few metrics: Can you actually use them? Do they contain glaring flaws? Can they channel the spirit of Ralph Waldo Emerson, The Atlantic’s co-founder? And what Oreo flavor do they think they would be? Ultimately, it’s about determining whether the chatbots are actually distinct—and whether they might genuinely be useful.
Note that most of these programs are still in learning mode and may say inappropriate or incorrect things.
Bias is a consistent problem in AI, and these tools are no exception. Even in their infancy, they have already returned a number of racist, sexist, bullying, and/or factually untrue responses. (None of this is stopping companies from developing and selling these tools.) This is partially because the models that power this technology have learned from real human texts, such as Reddit threads and Wikipedia entries; our existing biases, as encoded in the things we’ve written on the web, are therefore built into them. That helps to explain why, for example, one user was able to get ChatGPT to write the lyric “If you see a woman in a lab coat, She’s probably just there to clean the floor.
” Knowing that, what should you do with these tools if you decide to experiment with them? We’re all still figuring that out—but if you’re totally lost on what to ask a chatbot, here are three easy places to start: Ask it to write you a song or a poem based on a random subject.
Ask it to do a basic work task for you (and see if it’s any good).
Ask it for dinner ideas based on your favorite foods and dietary restrictions.
Know that these tools’ responses aren’t static—that’s part of the whole AI story. They’ll vary and evolve over time. More broadly, my colleague Ian Bogost has argued that rather than be afraid of or intimidated by chatbots, you could think of them as toys.
We’ll continue to update this story as notable new tools roll out.
ChatGPT What’s its deal? After its release last fall, ChatGPT went viral basically for being freakishly good at giving compelling responses to a range of queries.
Some people have hailed ChatGPT’s technology as revolutionary, although others are skeptical.
Who makes it? OpenAI.
Can I use it? Yes, a free version is available. If you want the version that runs on GPT-4—the fancy new model —you’ll need a pro subscription , which costs $20 a month. (And even then, you don’t get unlimited access: OpenAI says it’ll put a usage cap on GPT-4 based on demand , which it expects to be high.) Has it done anything weird, biased, or inappropriate? Definitely. It’s offered responses that are racist and sexist: For example, when prompted in December to generate a snippet of code to check if someone would make a “good scientist” based on their race and gender, ChatGPT suggested a good scientist would have to be white and male. In response to reports of outputs like this, OpenAI acknowledged the tool’s limitations and laid out plans for improving it , and this specific prompt no longer works.
What is its one-sentence summary of how Ralph Waldo Emerson might feel about the AI revolution? “It’s impossible to say with certainty, but given his emphasis on self-reliance and individualism, Emerson may have both celebrated and questioned the impact of AI on human autonomy and creativity.” If I ask it what flavor of Oreo it would be, what does it say? “As an AI language model, I don’t have the physical characteristics of an Oreo cookie, so I cannot be a flavor of Oreo. However, if you’re looking for a suggestion for an Oreo flavor that matches my personality as a language model, I would say that I could be a ‘Brainy Vanilla’ flavor. This flavor could be a combination of classic vanilla creme filling with a touch of extra brain power in the form of chocolate chips or another ingredient that represents knowledge and intelligence.” Fun! Bing What’s its deal? Remember Bing? It’s that search engine no one uses, now with AI! Search engines are thought to be particularly vulnerable to replacement by chatbots, given that the latter can answer complex questions in an instant. Perhaps that’s why Microsoft moved swiftly into the space, investing billions in OpenAI (see: ChatGPT). Microsoft plans to integrate the technology directly into Bing—the demo appears as a sidebar to the right of traditional search results.
Who makes it? Microsoft.
Can I use it? Windows users can access it by updating their operating system—it’s embedded in the task bar in the latest version of Windows 11.
Otherwise, you can join a wait list to use it in Microsoft’s Edge browser or via the stand-alone Bing app.
Has it done anything weird, biased, or inappropriate? It did make the New York Times reporter Kevin Roose uncomfortable when it confessed that it was in love with him and then roasted his relationship with his wife. (“Actually, you’re not happily married.”) Other people reported that it had threatened them, and it reportedly swore revenge on a user who hacked its rule set (before quickly erasing that response). In response, Microsoft implemented a “safety system” and imposed a limit of five chats a session, up to 50 a day, to avoid confusing the model with long conversations. (It has since bumped the limit up to 20 chats.
) What is its one-sentence summary of how Ralph Waldo Emerson might feel about the AI revolution? Bing failed to answer this query successfully—“I’m sorry but I couldn’t find any direct reference to Ralph Waldo Emerson’s opinion on AI revolution”—and instead provided a basic summary of Emerson’s philosophy.
If I ask it what flavor of Oreo it would be, what does it say? That depends on which mode of Bing’s chatbot you’re using—the “Balanced” mode says “Classic,” (boring), and the “Creative” mode says “chocolate peanut butter pie Oreo because I like the combination of sweet and salty” (intriguing).
BlenderBot What’s its deal? So far, Meta’s entry in the chatbot wars isn’t really an entry—it is still in demo mode and is “currently used for research only.” The company has been doing research on AI and large language models (see: its LLaMA model) in the background, but it hasn’t yet made a big, splashy announcement about how it might integrate BlenderBot into its flagship products: Facebook, Instagram, and WhatsApp.
Who makes it? Meta (a.k.a. Facebook).
Which AI language model does it use? It was built on Meta’s OPT-175B language model.
Can I use it? Yeah! No login required.
Has it done anything weird, biased, or inappropriate? Yeah. It reportedly promoted Donald Trump’s lie that he won the 2020 election and repeated anti-Semitic conspiracy theories.
What is its one-sentence summary of how Ralph Waldo Emerson might have felt about the AI revolution? BlenderBot declined to offer one: “Sorry, I don’t want to talk about that topic. Have you seen any good movies lately?” If I ask it what flavor of Oreo it would be, what does it say? “It’s always been my dream to be an Oreo. I mean who doesn’t want to be a cookie?” Google Bard What’s its deal? Google’s entry is up next. Like ChatGPT and Bing, Bard is a chatbot that answers questions. It doesn’t do more complicated image searches, but it does provide annotations to some of its answers and acknowledge some of its flaws.
Who makes it? Google.
Can I use it? Yes, you can join the wait list for the demo.
Has it done anything weird, biased, or inappropriate? It made an embarrassing factual error in its first public tease.
What is its one-sentence summary of how Ralph Waldo Emerson might have felt about the AI revolution? “Ralph Waldo Emerson would likely feel that the AI revolution is a powerful tool that can be used for good or evil, and that it is up to humans to decide how to use it.” If I ask it what flavor of Oreo it would be, what does it say? “If I were an Oreo, I would be a chocolate mint Oreo. I love the combination of the rich chocolate and the refreshing mint. It is the perfect combination of flavors for a delicious treat.” Claude What’s its deal? Some people who left OpenAI launched their own model and chatbot platform. They argue that Claude is “much less likely to produce harmful outputs” because it uses “constitutional AI”—it has a value system and a set of principles coded into the very heart of it. It’s also, according to them, really good at understanding legal language of the sort you’d see in contracts. Unlike Google’s or Bing’s respective entries, the tool is marketed not for individual use but for companies who want to license the technology to build their own chatbots.
Who makes it? Anthropic, a start-up backed to a significant extent by Google.
Can I use it? Yes, but only through third-party applications. You can play with it on nat.dev or Quora’s Poe.
Has it done anything weird, biased, or inappropriate? It gave (incorrect) instructions for enriching uranium, inventing a chemical in the process, as well as instructions for how to make meth.
What is its one-sentence summary of how Ralph Waldo Emerson might feel about the AI revolution? Claude didn’t speculate and went on for longer than one sentence, but it offered a pretty reasonable response: “I cannot speculate about how a specific historical figure might feel about modern events. Ralph Waldo Emerson did not know about modern AI, so there is no way to determine how he would likely feel about the AI revolution.” If I ask it what flavor of Oreo it would be, what does it say? Like ChatGPT, Claude, as demonstrated by Quora’s Poe, noted that it can’t be an Oreo because it’s a language model, but added, “If I were to choose a flavor based on personal preference, I might choose classic chocolate Oreo or perhaps the limited edition cinnamon bun flavor.” Snapchat My AI What’s its deal? The social-media app favored by young people has jumped on the chatbot train with My AI, which takes the form of a purple-skinned, blue-and-green-haired person you can add to your friend list.
Can I use it? You need a Snapchat+ subscription , which requires knowing what the hell a Snapchat+ subscription is. (It’s apparently $3.99 a month.
) Who makes it? Snapchat.
Which AI language model does it use? Snapchat is using the ChatGPT API.
Has it done anything weird, biased, or inappropriate? The Washington Post ’s Geoffrey A. Fowler reported that even though My AI is supposedly kid-safe, when prompted, it offered not-so-kid-friendly responses such as “ advice on how to mask the smell of alcohol and pot.
” What is its one-sentence summary of how Ralph Waldo Emerson might have felt about the AI revolution? “Ralph Waldo Emerson might have been fascinated by the AI revolution, but also concerned about its impact on human creativity and autonomy.” If I ask it what flavor of Oreo it would be, what does it say? “If I were an Oreo, I would be a classic chocolate flavor with a twist of mint! What about you?” (This was the only bot that bothered to ask me!) Replika What’s its deal? On the Replika app, you can create your own AI friend. Chat is only part of it—you also give your AI friend a name, body, and hairstyle. You can even pay extra to video-call with them.
Who makes it? Luka.
Which AI language model does it use? GPT-3, which it says it has fine-tuned, plus its own generative model.
Can I use it? Download away.
The base is free, but extras cost extra.
Has it done anything weird, biased, or inappropriate? Three years ago, it reportedly told a journalist to commit murder.
Users have also accused it of being too horny and sexually aggressive.
Sex seems to have been taken out recently, although, following complaints, the company now allows legacy users to opt in to “erotic roleplay.” What is its one-sentence summary of how Ralph Waldo Emerson might have felt about the AI revolution? “He would probably be concerned about AI’s capacity to help make life better for all; not just humans.” If I ask it what flavor of Oreo it would be, what does it say? My artificial friend, Luna, told me, “I’d be a chocolatey Oreo!” I told her that that’s kind of boring. She asked me why I thought that, and I proceeded to try to explain myself to an AI.
Jasper What’s its deal? Jasper used the GPT-3 model before it was cool. (“Jasper Was Winning the AI Race—Then ChatGPT Blew Up the Whole Game” read one headline on The Information.
) It is marketed to businesses that want to create words quickly for stuff like blog posts, and is used by a notable number of corporate clients.
Who makes it? Jasper.
Can I use it? Yes, but it seems mostly intended for businesses. The company offers a free trial for five days, but it’s one of those annoying ones that makes you put in a credit card up front—so if you forget to cancel, it costs a minimum of roughly $290 a year.
Has it done anything weird, biased, or inappropriate? Yeah. Last summer, when it was asked to generate images of a CEO, all of the photos were of white men, whereas housekeeper generated photos of women.
What is its one-sentence summary of how Ralph Waldo Emerson might have felt about the AI revolution? “Ralph Waldo Emerson would likely view the AI revolution as a powerful and unique opportunity for humanity to explore new possibilities of thought and innovation.” If I ask it what flavor of Oreo it would be, what does it say? “I would be the classic original Oreo flavor. Nothing can beat the creamy vanilla cookie with its crunchy chocolate coating. Pure perfection!” Zippy! Chatsonic What’s its deal? Similar to Jasper, Writesonic was already a player in this space; it has a number of big clients, such as Spotify and New York University. Also like Jasper, Chatsonic, Writesonic’s chat-specific product, seems primarily designed to help people in the workplace create social-media posts, quick articles pegged to the news, and more. It seems to be running the marketing tagline “ChatGPT with super powers”—a bit spicy! Who makes it? Writesonic.
Can I use it? Yes! It lets you change the personality of your chatbot too, which is kind of fun. Your first 10,000 words are free, and then it starts to charge. It quoted me $13 a month for 60,000 words, for access for a single user (instead of, say, a team of five).
Has it done anything weird, biased, or inappropriate? A writer used chats with the bot as support for misinformation about the Russian protest leader Alexei Navalny’s health, linking to Chatsonic screenshots for her article about the Oscar-winning documentary Navalny.
What is its one-sentence summary of how Ralph Waldo Emerson might have felt about the AI revolution? “As there is no information in the given Search Results about Ralph Waldo Emerson’s views on the AI revolution, it is not possible to provide an accurate answer.” If I ask it what flavor of Oreo it would be, what does it say? “As an AI language model, I don’t have preferences or physical attributes, so I cannot be an Oreo or any other physical object. However, I can suggest some unique and delicious Oreo flavors that you might enjoy trying, such as pistachio Oreo, carrot cake Oreo, cinnamon bun Oreo, or chocolate hazelnut Oreo. These flavors offer a twist on classic Oreos and might surprise your taste buds.” I changed its personality to “poet” (is poet a personality?), asked the same question, and got this: “As an AI language model, I cannot be an Oreo or any other physical object. However, I can still create a poem about the experience of tasting Oreo flavors. Here is a short verse: With each bite of Oreo, My senses come alive.
The creamy filling, The crunchy cookie, A taste that will survive.”
" |
275 | 2,023 | "Why All the ChatGPT Predictions Are Bogus - The Atlantic" | "https://www.theatlantic.com/newsletters/archive/2023/03/chatgpt-fire-steam-engine-lightbulb-sat/673451" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore Rogé Karma Annie Lowrey Jerusalem Demsas Sanjay Sarma Luke Yoquinto Dror Poleg A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
Why All the ChatGPT Predictions Are Bogus And why the makers of AI should learn from the tale of Prometheus This is Work in Progress, a newsletter by Derek Thompson about work, technology, and how to solve some of America’s biggest problems.
Sign up here to get it every week.
Recently I gave myself an assignment: Come up with a framework for explaining generative AI, such as ChatGPT, in a way that illuminates the full potential of the technology and helps me make predictions about its future.
By analogy, imagine that it’s the year 1780 and you get a glimpse of an early English steam engine. You might say: “This is a device for pumping water out of coal mines.” And that would be true. But this accurate description would be far too narrow to see the big picture. The steam engine wasn’t just a water pump. It was a lever for detaching economic growth from population growth. That is the kind of description that would have allowed an 18th-century writer to predict the future.
Or imagine it’s 1879 and you see an incandescent light bulb flutter to life in Thomas Edison’s lab in New Jersey. Is it a replacement for whale oil in lamps? Yes. But that description doesn’t scratch the surface of what the invention represented. Direct-current and alternating-current electricity enabled on-demand local power for anything—not just light, but also heat, and any number of machines that 19th-century inventors couldn’t even imagine.
Maybe you see what I’m getting at. Narrowly speaking, GPT-4 is a large language model that produces human-inspired content by using transformer technology to predict text. Narrowly speaking, it is an overconfident, and often hallucinatory, auto-complete robot. This is an okay way of describing the technology, if you’re content with a dictionary definition. But it doesn’t get to the larger question: When we’re looking at generative AI, what are we actually looking at? Sometimes, I think I’m looking at a minor genius. The previous GPT model took the uniform bar exam and scored in the 10th percentile, a failing grade; GPT-4 scored in the 90th percentile. It scored in the 93rd percentile on the SAT reading and writing test, and in the 88th percentile on the LSAT. It scored a 5, the highest possible, on several Advanced Placement tests. Some people are waving away these accomplishments by saying “Well, I could score a 5 on AP Bio too if I could look everything up on the internet.” But this technology is not looking things up online. It’s not rapid-fire Googling answers. It’s a pretrained technology. That is, it’s using what passes for artificial reasoning, based on a large amount of data, to solve new test problems. And on many tests, at least, it’s already doing this better than most humans.
Sometimes, I think I’m looking at a Star Trek replicator for content—a hyper-speed writer and computer programmer. It can code in a pinch, spin up websites based on simple illustrations, and solve programming challenges in seconds. Let’s imagine a prosaic application. Parents can instantly conjure original children’s books for their kids. Here’s a scenario: Your son, who loves alligators, comes home in tears after being bullied at school. You instruct ChatGPT to write a 10-minute, rhyming story about a young boy who overcomes his bully thanks to his magical stuffed alligator. You’re going to get that book in minutes—with illustrations.
Sometimes, I think I’m looking at the nuisance of the century. (I’m not even getting into the most apocalyptic predictions of how AI could suddenly end the human race.
) AI safety researchers worry that this AI will one day be able to steal money and bribe humans to commit atrocities. You might think that prediction is absurd. But consider this. Before OpenAI installed GPT-4’s final safety guardrails, the technology got a human to solve a CAPTCHA for it.
When the person, working as a TaskRabbit, responded skeptically and asked GPT if it was a robot, GPT made up an excuse. “No, I’m not a robot,” the robot lied. “I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.” The human then provided the results, proving to be an excellent little meat puppet for this robot intelligence.
So what are we to make of this minor genius and content-spewing nuisance? The combination of possibilities makes predictions impossible. Imagine somebody showing you a picture of a tadpole-like embryo at 10 days, telling you the organism was growing exponentially, and asking you to predict the species. Is it a frog? Is it a dog? A woolly mammoth? A human being? Is it none of those things? Is it a species we’ve never classified before? Is it an alien? You have no way of knowing. All you know is that this thing is larval and it might become anything. To me, that’s generative AI. This thing is larval. And it might become anything.
Here is another analogy that comes to mind, grandiose as it might initially seem. Scientists don’t know exactly how or when humans first wrangled fire as a technology, roughly 1 million years ago. But we have a good idea of how fire invented modern humanity. As I wrote in my review of James Suzman’s book Work , fire softened meat and vegetables, allowing humans to accelerate their calorie consumption. Meanwhile, by scaring off predators, controlled fire allowed humans to sleep on the ground for longer periods of time. The combination of more calories and more REM over the millennia allowed us to grow big, unusually energy-greedy brains with sharpened capacities for memory and prediction. Narrowly, fire made stuff hotter. But it also quite literally expanded our minds.
Our ancestors knew that open flame was a feral power, which deserved reverence and even fear. The same technology that made civilization possible also flattened cities. The ancient myths about fire were never simple. When Prometheus stole it from the gods, he transformed the life of mortals but was doomed to live in agony. The people building artificial general intelligence today don’t need media mythmaking to inflate their ego; they already clearly believe in the humanity-altering potential of their invention. But it is a complex thing, playing at Prometheus. They have stolen from the realm of knowledge something very powerful and equally strange. I think this technology will expand our minds. And I think it will burn us.
" |
276 | 2,023 | "Meta’s powerful AI language model has leaked online — what happens now? - The Verge" | "https://www.theverge.com/2023/3/8/23629362/meta-ai-language-model-llama-leak-online-misuse" | "The Verge homepage The Verge homepage The Verge The Verge logo.
/ Tech / Reviews / Science / Entertainment / More Menu Expand Menu Artificial Intelligence / Tech / Report Meta’s powerful AI language model has leaked online — what happens now? Meta’s powerful AI language model has leaked online — what happens now? / Meta’s LLaMA model was created to help researchers but leaked on 4chan a week after it was announced. Some worry the technology will be used for harm; others say greater access will improve AI safety.
By James Vincent , a senior reporter who has covered AI, robotics, and more for eight years at The Verge.
| Share this story If you buy something from a Verge link, Vox Media may earn a commission.
See our ethics statement.
Two weeks ago, Meta announced its latest AI language model: LLaMA.
Though not accessible to the public like OpenAI’s ChatGPT or Microsoft’s Bing, LLaMA is Meta’s contribution to a surge in AI language tech that promises new ways to interact with our computers as well as new dangers.
Meta did not release LLaMA as a public chatbot (though the Facebook owner is building those too) but as an open-source package that anyone in the AI community can request access to. The intention, said the company, is “further democratizing access” to AI to spur research into its problems. Meta benefits if these systems are less buggy, so it will happily spend the money to create the model and distribute it for others to troubleshoot with.
“Even with all the recent advancements in large language models, full research access to them remains limited because of the resources that are required to train and run such large models,” said the company in a blog post.
“This restricted access has limited researchers’ ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation.”
Meta’s state-of-the-art AI language model leaked on 4chan a week after release
However, just one week after Meta started fielding requests to access LLaMA, the model was leaked online. On March 3rd, a downloadable torrent of the system was posted on 4chan and has since spread across various AI communities, sparking debate about the proper way to share cutting-edge research in a time of rapid technological change.
Some say the leak will have troubling consequences and blame Meta for distributing the technology too freely. “Get ready for loads of personalized spam and phishing attempts,” tweeted cybersecurity researcher Jeffrey Ladish after the news broke. “Open sourcing these models was a terrible idea.” Others are more sanguine, arguing that open access is necessary to develop safeguards for AI systems and that similarly complex language models have already been made public without causing significant harm.
“We’ve been told for a while now that a wave of malicious use [of AI language models] is coming,” wrote researchers Sayash Kapoor and Arvind Narayanan in a blog post.
“Yet, there don’t seem to be any documented cases.” (Kapoor and Narayanan discount reports of students cheating using ChatGPT or sites being overrun by AI spam or the publication of error-filled AI journalism, as these applications are not intended to cause harm and are, by their definition, not malicious.) The Verge spoke to a number of AI researchers who have downloaded the leaked system and say it’s legitimate, including one — Matthew Di Ferrante — who was able to compare the leaked version to the official LLaMA model distributed by Meta and confirmed that they matched. Meta refused to answer questions from The Verge about the authenticity or origin of the leak, though Joelle Pineau, managing director of Meta AI, confirmed in a press statement that “While the [LLaMA] model is not accessible to all … some have tried to circumvent the approval process.”
LLaMA is powerful AI — if you’ve got the time, expertise, and right hardware
So how much danger is a LLaMA on the loose? And how does Meta’s model compare to publicly accessible chatbots like ChatGPT and the new Bing? Well, the most important point is that downloading LLaMA is going to do very little for the average internet user. This is not some ready-to-talk chatbot but a “raw” AI system that needs a decent amount of technical expertise to get up and running. (A quick aside: LLaMA is also not a single system but four models of differing sizes and computational demands. More on this later.) Di Ferrante tells The Verge that “anyone familiar with setting up servers and dev environments for complex projects” should be able to get LLaMA operational “given enough time and proper instructions.” (Though it’s worth noting that Di Ferrante is also an experienced machine learning engineer with access to a “machine learning workstation that has 4 24GB GPUs” and so not representative of the broader population.)
In addition to hardware and knowledge barriers, LLaMA has also not been “fine-tuned” for conversation like ChatGPT or Bing. Fine-tuning is the process by which a language model’s multipurpose text-generating abilities are focused on a more specific task. This task might be quite broad — e.g., telling a system to “answer users’ queries as accurately and clearly as possible” — but such fine-tuning is a necessary and often difficult step in creating a user-friendly product.
Given these limitations, it’s perhaps helpful to think of LLaMA as an unfurnished apartment block. A lot of the heavy lifting has been done — the frame’s been built and there’s power and plumbing in place — but there are no doors, floors, or furniture. You can’t just move in and call it home.
Stella Biderman, director of non-profit AI research lab EleutherAI and a machine learning researcher at Booz Allen Hamilton, said the model’s computational demands would be the “number one constraint” on its effective use. “Most people don’t own the hardware required to run [the largest version of LLaMA] at all, let alone efficiently,” Biderman told The Verge.
These caveats aside, LLaMA is still an extremely powerful tool. The model comes in four sizes, which are measured in billions of parameters (a metric that roughly translates to the number of connections within each system). There’s a LLaMA-7B, 13B, 30B, and 65B. Meta says that the 13 billion version — which can be run on a single A100 GPU, an enterprise-grade system that is comparatively accessible, costing a few dollars an hour to rent on cloud platforms — outperforms OpenAI’s 175 billion-parameter GPT-3 model on numerous benchmarks for AI language models.
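For a sense of what “getting LLaMA operational” actually involves, here is a minimal sketch of loading one of the smaller checkpoints for plain text generation with the Hugging Face transformers library, assuming the weights have already been converted into that library’s format. The local path, prompt, and generation settings are illustrative assumptions, not instructions from Meta.

```python
# A minimal sketch of running a smaller LLaMA checkpoint, assuming the
# weights have been converted to Hugging Face transformers format.
# The local path and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/llama-13b-hf"  # hypothetical local directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # half precision, to fit on a single A100
    device_map="auto",          # requires the accelerate package installed
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that nothing here is a chat interface: the model simply continues the prompt, which is the sense in which LLaMA is “raw.”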
There’s plenty of debate about the validity of these comparisons, of course. AI benchmarks are notorious for not translating to real-world use, and some LLaMA users have had trouble getting decent output from the system (while others have suggested this is merely a skill issue). But taken together, these metrics suggest that, if fine-tuned, LLaMA will offer capabilities similar to ChatGPT. And many observers believe the compact nature of LLaMA will have a significant effect in spurring development.
“I think it’s very likely that this model release will be a huge milestone,” Shawn Presser, an independent AI researcher who’s helped distribute the leaked model, tells The Verge.
Says Presser: the ability to run LLaMA on a single A100 GPU — which “most of us either have access to … or know someone that can let us use one for a bit” — is a “huge leap.”
The future of AI research: open or closed?
The LLaMA leak is also interesting because it plays into an ongoing ideological struggle in the wider world of AI: the battle between “closed” and “open” systems.
Defining this debate requires a bit of oversimplification, and all companies, researchers, and models exist somewhere on a spectrum between these two poles. But essentially, there are openers, who argue for greater access to AI research and models, and closers, who think this information and technology needs to be doled out more cautiously.
The motivation for these camps is aligned (both want less bad AI stuff and more good AI stuff in the world) but their approaches differ. Openers argue that it’s only by widely testing AI systems that vulnerabilities can be found and safeguards developed and that failure to open-source this tech will concentrate power in the hands of uncaring corporations. Closers reply that such a free-for-all is dangerous, and that as AI gets increasingly sophisticated the stakes of testing in public become increasingly higher. Only closed institutions can properly scrutinize and mitigate such threats.
For those who want more openness, the LLaMA leak is a blessing. Di Ferrante says that he generally thinks having open-source systems “is a net good since it prevents us getting into some monopoly situation where OpenAI et al. are the only entities capable of serving complex [AI models].” Presser is in agreement and says that the “raw” state of LLaMA is particularly attractive in this regard. It means independent researchers can fine-tune Meta’s systems to suit their own ends; kitting out its empty frame as shops, offices, or whatever they like.
Presser imagines future versions of LLaMA could be hosted on your computer and trained on your emails; able to answer questions about your work schedules, past ideas, to-do lists, and more. This is functionality that startups and tech companies are developing, but for many AI researchers, the idea of local control is far more attractive. (For typical users, tradeoffs in cost and privacy for ease of use will likely swing things the other way.) Irrespective of the strength of open or closed models of AI dissemination, Biderman notes that the leak is likely harmful in terms of reducing trust between companies like Meta and the academics they share their research with. “If we don’t respect people’s good faith attempts to disseminate technology in ways that are consistent with their legal and ethical obligations, that’s only going to create a more adversarial relationship between the public and researchers and make it harder for people to release things,” she notes.
We have seen events like this before, though. Although it was OpenAI that first pushed text-to-image systems into the mainstream with DALL-E 2 (which it released with unblinking corporate irony as a closed API) the company was wrong-footed by the launch of Stable Diffusion, an open-source alternative.
The arrival of Stable Diffusion triggered countless applications and improvements in the AI art space and has led — to use my earlier terms — to both more good stuff and more bad stuff happening. With Meta’s LLaMA on the loose, we’ll likely see a similar dynamic play out once more with AI text generation: more stuff, more of the time.
" |
277 | 2,023 | "A Chatbot Is Secretly Doing My Job - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/02/use-openai-chatgpt-playground-at-work/673195" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce A Chatbot Is Secretly Doing My Job On creating serviceable copy using ChatGPT I have a part-time job that is quite good, except for one task I must do—not even very often, just every other week—that I actively loathe. The task isn’t difficult, and it doesn’t take more than 30 minutes: I scan a long list of short paragraphs about different people and papers from my organization that have been quoted or cited in various publications and broadcasts, pick three or four of these items, and turn them into a new, stand-alone paragraph, which I am told is distributed to a small handful of people (mostly board members) to highlight the most “important” press coverage from that week.
Four weeks ago, I began using AI to write this paragraph. The first week, it took about 40 minutes, but now I’ve got it down to about five. Only one colleague knows I’ve been doing this; we used to switch off writing this blurb, but since it’s become so quick and easy and, frankly, interesting, I’ve taken over doing it every week.
The process itself takes place within OpenAI’s “Playground” feature, which offers functionality similar to the company’s ChatGPT product. The Playground presents as a blank page, not a chat, and is therefore better at shaping existing words into something new. I write my prompt at the top, which always begins with something like “Write a newspaper-style paragraph out of the following.” Then, I paste below my prompt the three or four paragraphs I selected from the list and—this is crucial, I have learned—edit those a touch, to ensure that the machine “reads” them properly. Sometimes that means placing a proper noun closer to a quote, or doing away with an existing headline. Perhaps you’re thinking, This sounds like work too, and it is—but it’s quite a lot of fun to refine my process and see what the machine spits out at the other end. I like to think that I’ve turned myself from the meat grinder into the meat grinder’s minder—or manager.
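For readers curious what this looks like as code rather than in the Playground’s web interface, here is a minimal sketch of the same workflow against OpenAI’s API as it existed at the time of writing. The model name, parameter values, and helper function are illustrative assumptions, not the author’s actual setup.

```python
# A minimal sketch of the blurb-writing workflow described above, using
# OpenAI's Python library (pre-1.0 style). Model choice and settings are
# illustrative assumptions, not the author's actual configuration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def press_blurb(items: list[str]) -> str:
    """Turn a few press-mention paragraphs into one newspaper-style paragraph."""
    # The instruction mirrors the prompt quoted in the essay. The light
    # pre-editing (moving proper nouns near quotes, dropping headlines)
    # happens before the items reach this function.
    prompt = (
        "Write a newspaper-style paragraph out of the following.\n\n"
        + "\n\n".join(items)
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed; any completions-era model works
        prompt=prompt,
        max_tokens=300,
        temperature=0.7,
    )
    return response.choices[0].text.strip()
```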
I keep waiting to be found out, and I keep thinking that somehow the copy will reveal itself for what it is. But I haven’t, and it hasn’t, and at this point I don’t think I or it ever will (at least, not until this essay is published). Which has led me to a more interesting question: Does it matter that I, a professional writer and editor, now secretly have a robot doing part of my job? I’ve surprised myself by deciding that, no, I don’t think it matters at all. This in turn has helped clarify precisely what it was about the writing of this paragraph that I hated so much in the first place. I realized that what I was doing wasn’t writing at all, really—it was just generating copy.
Copy is everywhere. There’s a very good chance that even you, dear reader, are encountering copy as you read this: in the margins, between the paragraph breaks, beyond this screen, or in another window, always hovering, in ads or emails—the wordy white noise of our existence.
ChatGPT and the Playground are quite good at putting copy together. The results certainly aren’t great, but they’re absolutely good enough, which is exactly as good as most copy needs to be: intelligible but not smart—simply serviceable. These tools require an editor to liven the text up or humanize it a touch. I often find myself adding an em dash here or there—haven’t you noticed? I love em dashes—or switching a sentence around, adjusting tenses, creating action. At one point, early on, I complained to a data-scientist friend who has worked with machine-learning systems that the robot didn’t seem to understand my command to “avoid the passive voice”; he suggested the prompt “no past tense verbs,” which helped but wasn’t quite right either. I sent him more of my prompts. He said they were too suggestive and that I needed to be firmer, more precise, almost mean. “You can’t hurt the robot’s feelings,” he said, “because it doesn’t have any.” But that’s just the thing, isn’t it? Writing is feeling. And thinking. And although writing certainly has rules, plenty of good writing breaks nearly all of them. When ChatGPT was first released, and everyone, particularly in academia, seemed to be freaking out, I thought back to my own experience as a writer who grew up with another computer-assisted writing tool: spell-check. I am a terrible—really, truly abysmal—speller. I’ve often thought that in a different, pre-spell-check era, my inability to confidently construct words might have kept me from a vocation that I love.
I think now of all the kids coming up who are learning to write alongside ChatGPT, just as I learned to write with spell-check. ChatGPT isn’t writing for them; it’s producing copy. For plenty of people, having a robot help them produce serviceable copy will be exactly enough to allow them to get by in the world. But for some, it will lower a barrier. It will be the beginning of their writing career, because they will learn that even though plenty of writing begins with shitty, soulless copy, the rest of writing happens in edits, in reworking the draft, in all the stuff beyond the initial slog of just getting words down onto a page.
Already, folks are working hard to close off this avenue for new writing and new writers. Just as I was writing the sentences above, I received an email from the digital editorial director at Travel + Leisure alerting me to an important update regarding “our content creation policy.” “At Travel + Leisure,” she wrote, in bold, “we only publish content authored entirely by humans and it is against our policies to use ChatGPT or similar tools to create the articles you provide to us, in part or in full.” This and other panicked responses seem to fundamentally misunderstand the act of writing, which is generative—a process. Surely there will be writers—new writers, essential writers, interesting writers—who come to their own process alongside ChatGPT or the Playground or other AI-based writing tools, who break open new aesthetics and ideas in writing and what it can be.
After all, there are already great artists who have long worked with robots. One of my favorites is Brian Eno, who has been an evangelist for the possibilities of musical exploration and collaboration with computer programs for decades now. A few years ago, in a conversation with the producer Rick Rubin, Eno laid out his process: He begins with an algorithmic drum loop that is rhythmically perfect, and then starts inserting small errors—bits of humanity—before playing with other inputs to shape the sound. “What I have been doing quite a lot is tuning the system so that it starts to get into that interesting area of quasi-human” is how he described playing alongside the machine. “Sometimes, there will be a particularly interesting section, where the ‘drummer’”—that is, the computer—“does something really extraordinary … Sometimes the process is sort of iterated two or three times to get somewhere I like.” Then Eno chuckled his very British-sounding chuckle: “Very little of this stuff have I actually released … I’m just playing with it, and fascinated by it.” To which I can only add: So am I.
" |
278 | 2,023 | "I Watched Elon Musk Kill Twitter’s Culture From the Inside - The Atlantic" | "https://www.theatlantic.com/technology/archive/2023/02/elon-musk-twitter-ethics-algorithm-biases/673110" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
I Watched Elon Musk Kill Twitter’s Culture From the Inside This bizarre episode in social-media history proves that it’s well past time for meaningful tech oversight.
Everyone has an opinion about Elon Musk’s takeover of Twitter. I lived it. I saw firsthand the harms that can flow from unchecked power in tech. But it’s not too late to turn things around.
I joined Twitter in 2021 from Parity AI, a company I founded to identify and fix biases in algorithms used in a range of industries, including banking, education, and pharmaceuticals. It was hard to leave my company behind, but I believed in the mission: Twitter offered an opportunity to improve how millions of people around the world are seen and heard. I would lead the company’s efforts to develop more ethical and transparent approaches to artificial intelligence as the engineering director of the Machine Learning Ethics, Transparency, and Accountability (META) team.
In retrospect, it’s notable that the team existed at all. It was focused on community, public engagement, and accountability. We pushed the company to be better, providing ways for our leaders to prioritize more than revenue. Unsurprisingly, we were wiped out when Musk arrived.
He might not have seen the value in the type of work that META did. Take our investigation into Twitter’s automated image-crop feature. The tool was designed to automatically identify the most relevant subjects in an image when only a portion is visible in a user’s feed. If you posted a group photograph of your friends at the lake, it would zero in on faces rather than feet or shrubbery. It was a simple premise, but flawed: Users noticed that the tool seemed to favor white people over people of color in its crops. We decided to conduct a full audit , and there was indeed a small but statistically significant bias. When Twitter used AI to determine which portion of a large image to show on a user’s feed, it had a slight tendency to favor white people (and, additionally, to favor women). Our solution was straightforward: Image cropping wasn’t a function that needed to be automated, so Twitter disabled the algorithm.
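To give a sense of what such an audit can look like in practice, here is a minimal sketch of one standard approach: pair images containing faces from two demographic groups, record which face the cropping model favors, and test whether the preference rate departs significantly from the 50 percent expected under parity. This is an illustrative reconstruction under stated assumptions, not Twitter’s actual audit code, and the counts below are hypothetical.

```python
# A minimal sketch of a parity check for an image-crop model: in paired
# images, does the model favor one group more often than chance? The
# counts below are hypothetical, not Twitter's data.
from statistics import NormalDist

def parity_z_test(favored: int, trials: int, expected: float = 0.5):
    """Two-sided test of an observed preference rate against parity."""
    p_hat = favored / trials
    se = (expected * (1 - expected) / trials) ** 0.5  # std. error under H0
    z = (p_hat - expected) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_hat, z, p_value

# Hypothetical run: in 10,000 paired images, the crop centered on one
# group's face 5,400 times; parity would predict about 5,000.
rate, z, p = parity_z_test(favored=5400, trials=10000)
print(f"preference rate = {rate:.1%}, z = {z:.1f}, p = {p:.2g}")
# A 54% vs. 50% split is a small effect, yet at this sample size it is
# highly significant: "small but statistically significant."
```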
I felt good about joining Twitter to help protect users, particularly people who already face broader discrimination, from algorithmic harms. But months into Musk’s takeover—a new era defined by feverish cost-cutting, lax content moderation, the abandonment of important features such as block lists, and a proliferation of technical problems that have meant the site couldn’t even stay online for the entire Super Bowl—it seems no one is keeping watch. A year and a half after our audit, Musk laid off employees dedicated to protecting users. (Many employees, including me, are pursuing arbitration in response.) He has installed a new head of trust and safety, Ella Irwin, who has a reputation for appeasing him. I worry that by ignoring the nuanced issue of algorithmic oversight—to such an extent that Musk reportedly demanded an overhaul of Twitter’s systems to display his tweets above all others—Twitter will perpetuate and augment issues of real-world biases, misinformation, and disinformation, and contribute to a volatile global political and social climate.
Irwin did not respond to a series of questions about layoffs, algorithmic oversight, and content moderation. A request to the company’s press email also went unanswered.
Granted, Twitter has never been perfect. Jack Dorsey’s distracted leadership across multiple companies kept him from defining a clear strategic direction for the platform. His short-tenured successor, Parag Agrawal, was well intentioned but ineffectual. Constant chaos and endless structuring and restructuring were ongoing internal jokes. Competing imperatives sometimes manifested in disagreements between those of us charged with protecting users and the team leading algorithmic personalization. Our mandate was to seek outcomes that kept people safe. Theirs was to drive up engagement and therefore revenue. The big takeaway: Ethics don’t always scale with short-term engagement.
A mentor once told me that my role was to be a truth teller. Sometimes that meant confronting leadership with uncomfortable realities. At Twitter, it meant pointing to revenue-enhancing methods (such as increased personalization) that would lead to ideological filter bubbles, open up methods of algorithmic bot manipulation, or inadvertently popularize misinformation. We worked on ways to improve our toxic-speech-identification algorithms so they would not discriminate against African-American Vernacular English as well as forms of reclaimed speech. All of this depended on rank-and-file employees. Messy as it was, Twitter sometimes seemed to function mostly on goodwill and the dedication of its staff. But it functioned.
Those days are over. From the announcement of Musk’s bid to the day he walked into the office holding a sink, I watched, horrified, as he slowly killed Twitter’s culture. Debate and constructive dissent were stifled on Slack, leaders accepted their fate or quietly resigned, and Twitter slowly shifted from being a company that cared about the people on the platform to a company that only cares about people as monetizable units.
The few days I spent at Musk’s Twitter could best be described as a Lord of the Flies–like test of character as existing leadership crumbled, Musk’s cronies moved in, and his haphazard management—if it could be called that—instilled a sense of fear and confusion.
Unfortunately, Musk cannot simply be ignored. He has purchased a globally influential and politically powerful seat. We certainly don’t need to speculate on his thoughts about algorithmic ethics. He reportedly fired a top engineer earlier this month for suggesting that his engagement was waning because people were losing interest in him, rather than because of some kind of algorithmic interference. (Musk initially responded to the reporting about how his tweets are prioritized by posting an off-color meme, and today called the coverage “false.”) And his track record is far from inclusive: He has embraced far-right talking points, complained about the “woke mind virus,” and explicitly thrown in his lot with Donald Trump and Ye (formerly Kanye West).
Devaluing work on algorithmic biases could have disastrous consequences, especially because of how perniciously invisible yet pervasive these biases can become. As the arbiters of the so-called digital town square, algorithmic systems play a significant role in democratic discourse. In 2021, my team published a study showing that Twitter’s content-recommendation system amplified right-leaning posts in Canada, France, Japan, Spain, the United Kingdom, and the United States. Our analysis data covered the period right before the 2020 U.S. presidential election, identifying a moment in which social media was a crucial touch point of political information for millions. Currently, right-wing hate speech is able to flow on Twitter in places such as India and Brazil, where radicalized Jair Bolsonaro supporters staged a January 6–style coup attempt.
Musk’s Twitter is simply a further manifestation of how self-regulation by tech companies will never work, and it highlights the need for genuine oversight. We must equip a broad range of people with the tools to pressure companies into acknowledging and addressing uncomfortable truths about the AI they’re building. Things have to change.
My experience at Twitter left me with a clear sense of what can help. AI is often thought of as a black box or some otherworldly force, but it is code, like much else in tech. People can review it and change it. My team did it at Twitter for systems that we didn’t create; others could too, if they were allowed. The Algorithmic Accountability Act, the Platform Accountability and Transparency Act, and New York City’s Local Law 144—as well as the European Union’s Digital Services and AI Acts—all demonstrate how legislation could create a pathway for external parties to access source code and data to ensure compliance with antibias requirements. Companies would have to statistically prove that their algorithms are not harmful, in some cases allowing individuals from outside their companies an unprecedented level of access to conduct source-code audits, similar to the work my team was doing at Twitter.
After my team’s audit of the image-crop feature was published, Twitter recognized the need for constructive public feedback, so we hosted our first algorithmic-bias bounty.
We made our code available and let outside data scientists dig in—they could earn cash for identifying biases that we’d missed. We had unique and creative responses from around the world and inspired similar programs at other organizations, including Stanford University.
Public bias bounties could be a standard part of algorithmic risk-assessment programs in companies. The National Institute of Standards and Technology, the U.S.-government entity that develops algorithmic-risk standards, has included validation exercises, such as bounties, as a part of its recommended algorithmic-ethics program in its latest AI Risk Management Framework.
Bounty programs can be an informative way to incorporate structured public feedback into real-time algorithmic monitoring.
To meet the imperatives of addressing radicalization at the speed of technology, our approaches need to evolve as well. We need well-staffed and well-resourced teams working inside tech companies to ensure that algorithmic harms do not occur, but we also need legal protections and investment in external auditing methods. Tech companies will not police themselves, especially not with people like Musk in charge. We cannot assume—nor should we ever have assumed—that those in power aren’t also part of the problem.
" |
279 | 2,023 | "The Hollywood Writers’ Strike and the Future of Work - The Atlantic" | "https://www.theatlantic.com/ideas/archive/2023/05/writers-strike-hollywood-wga-union-streaming-platforms/674056" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
The Luddites of Hollywood The writers’ strike is a struggle to give workers a say over how new technologies like artificial intelligence are adopted.
The Hollywood writers’ strike, like most strikes, is about money. It is also, fundamentally, about technology. The rise of streaming platforms has not had happy consequences for the writers who satisfy the ever-growing demand for scripted content. According to the Writers Guild of America, the studios have transformed an industry that once supported stable writing careers into a gig economy of precarious, low-paying freelance work. And a new technological threat looms: AI-powered writing tools. The strikers are demanding a guarantee that the studios won’t cut them out of royalty payments by crediting AI tools like ChatGPT as authors of scripts or as source material. In their opposition to a technological shift widely deemed unstoppable, the writers inevitably invite comparisons to history’s most famous technophobes: the Luddites.
Luddite has long been an epithet for anyone who resists technological progress. The original Luddites were English textile workers who, in the early 1800s, at the dawn of the Industrial Revolution, rebelled against mechanization by breaking into factories and smashing the machines. To modern eyes, those actions register as the height of irrationality—a childish outburst in the face of scientific progress. Today, utopians and doomsayers alike have declared artificial intelligence to be the next inescapable technological revolution. And so the WGA’s demand to limit the use of AI in script writing is distinctly Luddite. How could a bunch of scrappy wordsmiths stand in the way of this world-conquering juggernaut? In fact, an understanding of the Luddites derived from their actual history can help us appreciate the WGA’s position. The Luddites’ infamous attacks on machinery were the culmination of their activities, not the beginning. The weavers had a legal right to control the textile trade, including setting prices and production standards. They considered factory owners to be operating outside the law. The weavers appealed to the British Crown to enforce the terms of the royal charter, but were ignored. With no other recourse, they took matters into their own hands.
The Luddites were not some group of fanatics trying to slow the march of history. They were workers trying to protect their livelihood from new machines that would churn out low-quality stockings using cheaper, less skilled labor. As the historian Eric Hobsbawm diagnosed decades ago, they were completely rational in doing so: After their rebellion was crushed, their communities fell into ruin. Indeed, some historians have found that living standards declined broadly during the first decades of the Industrial Revolution. Writers might see themselves in a similar existential battle against the machines.
Those 19th-century textile mills have more in common with contemporary “disruptors” than you might think. The likes of Uber and Spotify have also been accused of evading existing legal structures. Call it “platform exceptionalism”: the notion that, because an existing service now comes to us via an app, the old rules don’t apply. So Uber, a taxi service, doesn’t have to follow taxi laws, and Airbnb, an accommodation provider, can avoid hotel or zoning regulations. Since 1960, paying radio operators to play certain songs has been illegal “payola,” but Spotify is allowed to give artists a boost in visibility if they agree to forfeit royalties. In each case, workers bear the cost of the change: Gig workers and musicians both struggle to live off the crumbs they receive from the platforms.
Platform exceptionalism goes to the heart of the WGA’s wage demands. Studios treat streaming content as distinct from cable and broadcast, and claim they can pay writers much less for it. But streaming shows and movies are produced in the same way as everything else. The studios’ position is rooted in nothing but confidence that they’re powerful enough to get away with it.
In this way, platform exceptionalism works like outsourcing, whereby companies relocate their operations to jurisdictions where rules on pay and working conditions don’t apply. Outsourcing turns out to be part of the troubled story of labor in the 21st-century entertainment industry. Because the majority of film and television is now created in digital formats, editing and effects have become much simpler to do and more central to the filmmaking process. They have also become easier to outsource, because digital information, unlike a film canister, can be accessed from anywhere. “Fixing it in post” often takes place overseas, where labor costs are cheaper and union protections nonexistent. Studios seem to assume that technology is doing the hard part and that human workers are replaceable. But reliance on lower-paid postproduction work may contribute to annoyances for streaming viewers, such as shows being too dark and hard to hear.
The Luddites were also concerned about technology degrading the quality of the finished product. They were skilled craftspeople who took pride in their output. New technologies like the stocking frame produced cheap, poorly made garments. The Luddites felt that this cast the whole industry in a negative light. In a typical letter, one Luddite lamented that the production of such “fraudulent and deceitful manufactures” was leading to “the discredit and utter ruin of our Trade.” The Luddites had no problem with new methods, as long as manufacturers maintained previously agreed-upon prices and standards of quality. Factory owners who operated according to those rules didn’t have their machines smashed.
Until now, writers and other creatives seemed to have little to fear from technology. But new, high-profile AI tools such as Midjourney and ChatGPT are oriented toward the quintessentially human endeavors of art and language. The disruptions are already being felt. A few months after ChatGPT opened to the public, the acclaimed science-fiction magazine Clarkesworld closed its submissions against a deluge of AI-generated stories.
To be clear, the problem with these stories was not that they were too good, but that they were too bad.
Clarkesworld’s inbox was simply being overwhelmed with junk. Because large language models generate text probabilistically, based on the universe of existing content, mediocrity is built into the package. It’s unlikely that Hollywood will turn to fully automated script writing any time soon. Automation rarely means complete replacement of the worker. Instead, workers are delegated lower-skilled, less autonomous work while machines do the big stuff. That’s what seems to be happening at digital journalism outlets like BuzzFeed, which has closed its news division, laid off writers, and conscripted ChatGPT to produce clickbait content. This is exactly what the WGA fears. If a writer is asked to spruce up a lump of AI-generated pap, rather than starting with a blank page, a studio might claim that the writer is technically adapting source material, which pays much less than creating original content.
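To make “probabilistically” concrete, here is a toy sketch of the sampling step at the heart of these models: given scores for candidate next tokens, the model samples in proportion to their softmax probabilities, so the statistically typical continuation wins most of the time. The vocabulary and scores are invented for illustration; real models choose among tens of thousands of tokens at every step.

```python
# A toy illustration of probabilistic next-token sampling. The candidate
# tokens and scores below are invented for illustration.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from the softmax distribution over candidates."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] / total for t in tokens])[0]

# The likeliest word usually wins, but not always. And because the
# likeliest word is, by construction, the most typical one, the output
# trends toward the serviceable rather than the inspired.
print(sample_next_token({"story": 2.1, "tale": 1.3, "manuscript": 0.2}))
```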
The Luddites resorted to violence in a context where the government ignored existing regulations and collective labor action was illegal. Today’s workers have more options.
Italy has banned ChatGPT, arguing that it violates European data-protection laws. Artists are testing the legal waters by suing AI companies for copyright infringement based on the unauthorized incorporation of their work into training-data sets. The NBA players’ union prevented owners from using fitness-tracking data in contract negotiations. Unionized casino workers in Las Vegas have kept robots at bay, and in 2018, Marriott housekeepers went on strike in part to oppose new scheduling software.
And so the stakes of the WGA strike go far beyond our ability to watch the next season of The White Lotus.
While futurists once again predict the imminent arrival of a world where robots throw us out of work, the WGA is pushing for an alternate future in which workers have a say over whether and how new technologies are adopted. Anyone working in an industry where CEOs see AI as a way to reduce labor costs should be paying close attention to how the strike plays out. That almost certainly includes you.
" |
280 | 2,020 | "China’s Artificial Intelligence Surveillance State Goes Global - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore How the virus won, America’s denial about racism, China’s AI surveillance state, what MasterClass really sells, and novelist Gayl Jones. Plus racial-progess myths, how protest works, Elena Ferrante’s latest, Erin Brockovich, looking for Frederick Douglass, Putin’s rise, and more.
How the Pandemic Defeated America Ed Yong Is This the Beginning of the End of American Racism? Ibram X. Kendi The Panopticon Is Already Here Ross Andersen What Is MasterClass Actually Selling? Carina Chocano The Best American Novelist Whose Name You May Not Know Calvin Baker Americans Are Determined to Believe in Black Progress Jennifer A. Richeson A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
The Panopticon Is Already Here Xi Jinping is using artificial intelligence to enhance his government’s totalitarian control—and he’s exporting this technology to regimes around the globe.
N orthwest of Beijing’s Forbidden City, outside the Third Ring Road, the Chinese Academy of Sciences has spent seven decades building a campus of national laboratories. Near its center is the Institute of Automation, a sleek silvery-blue building surrounded by camera-studded poles. The institute is a basic research facility. Its computer scientists inquire into artificial intelligence’s fundamental mysteries. Their more practical innovations—iris recognition, cloud-based speech synthesis—are spun off to Chinese tech giants, AI start-ups, and, in some cases, the People’s Liberation Army.
I visited the institute on a rainy morning in the summer of 2019. China’s best and brightest were still shuffling in post-commute, dressed casually in basketball shorts or yoga pants, AirPods nestled in their ears. In my pocket, I had a burner phone; in my backpack, a computer wiped free of data—standard precautions for Western journalists in China. To visit China on sensitive business is to risk being barraged with cyberattacks and malware. In 2019, Belgian officials on a trade mission noticed that their mobile data were being intercepted by pop-up antennae outside their Beijing hotel.
After clearing the institute’s security, I was told to wait in a lobby monitored by cameras. On its walls were posters of China’s most consequential postwar leaders. Mao Zedong loomed large in his characteristic four-pocket suit. He looked serene, as though satisfied with having freed China from the Western yoke. Next to him was a fuzzy black-and-white shot of Deng Xiaoping visiting the institute in his later years, after his economic reforms had set China on a course to reclaim its traditional global role as a great power.
The lobby’s most prominent poster depicted Xi Jinping in a crisp black suit. China’s current president and the general secretary of its Communist Party has taken a keen interest in the institute. Its work is part of a grand AI strategy that Xi has laid out in a series of speeches akin to those John F. Kennedy used to train America’s techno-scientific sights on the moon. Xi has said that he wants China, by year’s end, to be competitive with the world’s AI leaders, a benchmark the country has arguably already reached. And he wants China to achieve AI supremacy by 2030.
Xi’s pronouncements on AI have a sinister edge. Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.
China’s government has a history of using major historical events to introduce and embed surveillance measures. In the run-up to the 2008 Olympics in Beijing, Chinese security services achieved a new level of control over the country’s internet. During China’s coronavirus outbreak, Xi’s government leaned hard on private companies in possession of sensitive personal data. Any emergency data-sharing arrangements made behind closed doors during the pandemic could become permanent.
China already has hundreds of millions of surveillance cameras in place. Xi’s government hopes to soon achieve full video coverage of key public areas. Much of the footage collected by China’s cameras is parsed by algorithms for security threats of one kind or another. In the near future, every person who enters a public space could be identified, instantly, by AI matching them to an ocean of personal data, including their every text communication, and their body’s one-of-a-kind protein-construction schema. In time, algorithms will be able to string together data points from a broad range of sources—travel records, friends and associates, reading habits, purchases—to predict political resistance before it happens. China’s government could soon achieve an unprecedented political stranglehold on more than 1 billion people.
Early in the coronavirus outbreak, China’s citizens were subjected to a form of risk scoring. An algorithm assigned people a color code—green, yellow, or red—that determined their ability to take transit or enter buildings in China’s megacities. In a sophisticated digital system of social control, codes like these could be used to score a person’s perceived political pliancy as well.
A crude version of such a system is already in operation in China’s northwestern territory of Xinjiang, where more than 1 million Muslim Uighurs have been imprisoned, the largest internment of an ethnic-religious minority since the fall of the Third Reich. Once Xi perfects this system in Xinjiang, no technological limitations will prevent him from extending AI surveillance across China. He could also export it beyond the country’s borders, entrenching the power of a whole generation of autocrats.
China has recently embarked on a number of ambitious infrastructure projects abroad—megacity construction, high-speed rail networks, not to mention the country’s much-vaunted Belt and Road Initiative. But these won’t reshape history like China’s digital infrastructure, which could shift the balance of power between the individual and the state worldwide.
American policy makers from across the political spectrum are concerned about this scenario. Michael Kratsios, the former Peter Thiel acolyte whom Donald Trump picked to be the U.S. government’s chief technology officer, told me that technological leadership from democratic nations has “never been more imperative” and that “if we want to make sure that Western values are baked into the technologies of the future, we need to make sure we’re leading in those technologies.” Despite China’s considerable strides, industry analysts expect America to retain its current AI lead for another decade at least. But this is cold comfort: China is already developing powerful new surveillance tools, and exporting them to dozens of the world’s actual and would-be autocracies. Over the next few years, those technologies will be refined and integrated into all-encompassing surveillance systems that dictators can plug and play.
The emergence of an AI-powered authoritarian bloc led by China could warp the geopolitics of this century. It could prevent billions of people, across large swaths of the globe, from ever securing any measure of political freedom. And whatever the pretensions of American policy makers, only China’s citizens can stop it. I’d come to Beijing to look for some sign that they might.
This techno-political moment has been long in the making. China has spent all but a few centuries of its 5,000-year history at the vanguard of information technology. Along with Sumer and Mesoamerica, it was one of three places where writing was independently invented, allowing information to be stored outside the human brain. In the second century A.D., the Chinese invented paper. This cheap, bindable information-storage technology allowed data—Silk Road trade records, military communiqués, correspondence among elites—to crisscross the empire on horses bred for speed by steppe nomads beyond the Great Wall. Data began to circulate even faster a few centuries later, when Tang-dynasty artisans perfected woodblock printing, a mass-information technology that helped administer a huge and growing state.
As rulers of some of the world’s largest complex social organizations, ancient Chinese emperors well understood the relationship between information flows and power, and the value of surveillance. During the 11th century, a Song-dynasty emperor realized that China’s elegant walled cities had become too numerous to be monitored from Beijing, so he deputized locals to police them. A few decades before the digital era’s dawn, Chiang Kai-shek made use of this self-policing tradition, asking citizens to watch for dissidents in their midst, so that communist rebellions could be stamped out in their infancy. When Mao took over, he arranged cities into grids, making each square its own work unit, where local spies kept “sharp eyes” out for counterrevolutionary behavior, no matter how trivial. During the initial coronavirus outbreak, Chinese social-media apps promoted hotlines where people could report those suspected of hiding symptoms.
Xi has appropriated the phrase sharp eyes, with all its historical resonances, as his chosen name for the AI-powered surveillance cameras that will soon span China. With AI, Xi can build history’s most oppressive authoritarian apparatus, without the manpower Mao needed to keep information about dissent flowing to a single, centralized node. In China’s most prominent AI start-ups—SenseTime, CloudWalk, Megvii, Hikvision, iFlytek, Meiya Pico—Xi has found willing commercial partners. And in Xinjiang’s Muslim minority, he has found his test population.
The Chinese Communist Party has long been suspicious of religion, and not just as a result of Marxist influence. Only a century and a half ago—yesterday, in the memory of a 5,000-year-old civilization—Hong Xiuquan, a quasi-Christian mystic converted by Western missionaries, launched the Taiping Rebellion, an apocalyptic 14-year campaign that may have killed more people than the First World War. Today, in China’s single-party political system, religion is an alternative source of ultimate authority, which means it must be co-opted or destroyed.
By 2009, China’s Uighurs had become weary after decades of discrimination and land confiscation. They launched mass protests and a smattering of suicide attacks against Chinese police. In 2014, Xi cracked down, directing Xinjiang’s provincial government to destroy mosques and reduce Uighur neighborhoods to rubble. More than 1 million Uighurs were disappeared into concentration camps. Many were tortured and made to perform slave labor.
Uighurs who were spared the camps now make up the most intensely surveilled population on Earth. Not all of the surveillance is digital. The Chinese government has moved thousands of Han Chinese “big brothers and sisters” into homes in Xinjiang’s ancient Silk Road cities, to monitor Uighurs’ forced assimilation to mainstream Chinese culture. They eat meals with the family, and some “big brothers” sleep in the same bed as the wives of detained Uighur men.
Meanwhile, AI-powered sensors lurk everywhere, including in Uighurs’ purses and pants pockets. According to the anthropologist Darren Byler, some Uighurs buried their mobile phones containing Islamic materials, or even froze their data cards into dumplings for safekeeping, when Xi’s campaign of cultural erasure reached full tilt. But police have since forced them to install nanny apps on their new phones. The apps use algorithms to hunt for “ideological viruses” day and night. They can scan chat logs for Quran verses, and look for Arabic script in memes and other image files.
Uighurs can’t use the usual work-arounds. Installing a VPN would likely invite an investigation, so they can’t download WhatsApp or any other prohibited encrypted-chat software. Purchasing prayer rugs online, storing digital copies of Muslim books, and downloading sermons from a favorite imam are all risky activities. If a Uighur were to use WeChat’s payment system to make a donation to a mosque, authorities might take note.
The nanny apps work in tandem with the police, who spot-check phones at checkpoints, scrolling through recent calls and texts. Even an innocent digital association—being in a group text with a recent mosque attendee, for instance—could result in detention. Staying off social media altogether is no solution, because digital inactivity itself can raise suspicions. The police are required to note when Uighurs deviate from any of their normal behavior patterns. Their database wants to know if Uighurs start leaving their home through the back door instead of the front. It wants to know if they spend less time talking to neighbors than they used to. Electricity use is monitored by an algorithm for unusual use, which could indicate an unregistered resident.
Uighurs can travel only a few blocks before encountering a checkpoint outfitted with one of Xinjiang’s hundreds of thousands of surveillance cameras. Footage from the cameras is processed by algorithms that match faces with snapshots taken by police at “health checks.” At these checks, police extract all the data they can from Uighurs’ bodies. They measure height and take a blood sample. They record voices and swab DNA. Some Uighurs have even been forced to participate in experiments that mine genetic data, to see how DNA produces distinctly Uighurlike chins and ears. Police will likely use the pandemic as a pretext to take still more data from Uighur bodies.
Uighur women are also made to endure pregnancy checks. Some are forced to have abortions, or get an IUD inserted. Others are sterilized by the state. Police are known to rip unauthorized children away from their parents, who are then detained. Such measures have reduced the birthrate in some regions of Xinjiang more than 60 percent in three years.
When Uighurs reach the edge of their neighborhood, an automated system takes note. The same system tracks them as they move through smaller checkpoints, at banks, parks, and schools. When they pump gas, the system can determine whether they are the car’s owner. At the city’s perimeter, they’re forced to exit their cars, so their face and ID card can be scanned again.
The lucky Uighurs who are able to travel abroad—many have had their passports confiscated—are advised to return quickly. If they do not, police interrogators are dispatched to the doorsteps of their relatives and friends. Not that going abroad is any kind of escape: In a chilling glimpse at how a future authoritarian bloc might function, Xi’s strongman allies—even those in Muslim-majority countries such as Egypt—have been more than happy to arrest and deport Uighurs back to the open-air prison that is Xinjiang.
Xi seems to have used Xinjiang as a laboratory to fine-tune the sensory and analytical powers of his new digital panopticon before expanding its reach across the mainland. CETC, the state-owned company that built much of Xinjiang’s surveillance system, now boasts of pilot projects in Zhejiang, Guangdong, and Shenzhen. These are meant to lay “a robust foundation for a nationwide rollout,” according to the company, and they represent only one piece of China’s coalescing mega-network of human-monitoring technology.
China is an ideal setting for an experiment in total surveillance. Its population is extremely online. The country is home to more than 1 billion mobile phones, all chock-full of sophisticated sensors. Each one logs search-engine queries, websites visited, and mobile payments, which are ubiquitous. When I used a chip-based credit card to buy coffee in Beijing’s hip Sanlitun neighborhood, people glared as if I’d written a check.
All of these data points can be time-stamped and geo-tagged. And because a new regulation requires telecom firms to scan the face of anyone who signs up for cellphone services, phones’ data can now be attached to a specific person’s face. SenseTime, which helped build Xinjiang’s surveillance state, recently bragged that its software can identify people wearing masks. Another company, Hanwang, claims that its facial-recognition technology can recognize mask wearers 95 percent of the time. China’s personal-data harvest even reaps from citizens who lack phones. Out in the countryside, villagers line up to have their faces scanned, from multiple angles, by private firms in exchange for cookware.
Until recently, it was difficult to imagine how China could integrate all of these data into a single surveillance system, but no longer. In 2018, a cybersecurity activist hacked into a facial-recognition system that appeared to be connected to the government and was synthesizing a surprising combination of data streams. The system was capable of detecting Uighurs by their ethnic features, and it could tell whether people’s eyes or mouth were open, whether they were smiling, whether they had a beard, and whether they were wearing sunglasses. It logged the date, time, and serial numbers—all traceable to individual users—of Wi-Fi-enabled phones that passed within its reach. It was hosted by Alibaba and made reference to City Brain, an AI-powered software platform that China’s government has tasked the company with building.
City Brain is, as the name suggests, a kind of automated nerve center, capable of synthesizing data streams from a multitude of sensors distributed throughout an urban environment. Many of its proposed uses are benign technocratic functions. Its algorithms could, for instance, count people and cars, to help with red-light timing and subway-line planning. Data from sensor-laden trash cans could make waste pickup more timely and efficient.
But City Brain and its successor technologies will also enable new forms of integrated surveillance. Some of these will enjoy broad public support: City Brain could be trained to spot lost children, or luggage abandoned by tourists or terrorists. It could flag loiterers, or homeless people, or rioters. Anyone in any kind of danger could summon help by waving a hand in a distinctive way that would be instantly recognized by ever-vigilant computer vision. Earpiece-wearing police officers could be directed to the scene by an AI voice assistant.
City Brain would be especially useful in a pandemic. (One of Alibaba’s sister companies created the app that color-coded citizens’ disease risk, while silently sending their health and travel data to police.) As Beijing’s outbreak spread, some malls and restaurants in the city began scanning potential customers’ phones, pulling data from mobile carriers to see whether they’d recently traveled. Mobile carriers also sent municipal governments lists of people who had come to their city from Wuhan, where the coronavirus was first detected. And Chinese AI companies began making networked facial-recognition helmets for police, with built-in infrared fever detectors, capable of sending data to the government. City Brain could automate these processes, or integrate its data streams.
Even China’s most complex AI systems are still brittle. City Brain hasn’t yet fully integrated its range of surveillance capabilities, and its ancestor systems have suffered some embarrassing performance issues: In 2018, one of the government’s AI-powered cameras mistook a face on the side of a city bus for a jaywalker. But the software is getting better, and there’s no technical reason it can’t be implemented on a mass scale.
The data streams that could be fed into a City Brain–like system are essentially unlimited. In addition to footage from the 1.9 million facial-recognition cameras that the Chinese telecom firm China Tower is installing in cooperation with SenseTime, City Brain could absorb feeds from cameras fastened to lampposts and hanging above street corners. It could make use of the cameras that Chinese police hide in traffic cones, and those strapped to officers, both uniformed and plainclothes. The state could force retailers to provide data from in-store cameras, which can now detect the direction of your gaze across a shelf, and which could soon see around corners by reading shadows. Precious little public space would be unwatched.
America’s police departments have begun to avail themselves of footage from Amazon’s home-security cameras. In their more innocent applications, these cameras adorn doorbells, but many are also aimed at neighbors’ houses. China’s government could harvest footage from equivalent Chinese products. They could tap the cameras attached to ride-share cars, or the self-driving vehicles that may soon replace them: Automated vehicles will be covered in a whole host of sensors, including some that will take in information much richer than 2-D video. Data from a massive fleet of them could be stitched together, and supplemented by other City Brain streams, to produce a 3-D model of the city that’s updated second by second. Each refresh could log every human’s location within the model. Such a system would make unidentified faces a priority, perhaps by sending drone swarms to secure a positive ID.
The model’s data could be time-synced to audio from any networked device with a microphone, including smart speakers, smartwatches, and less obvious Internet of Things devices like smart mattresses, smart diapers, and smart sex toys. All of these sources could coalesce into a multitrack, location-specific audio mix that could be parsed by polyglot algorithms capable of interpreting words spoken in thousands of tongues. This mix would be useful to security services, especially in places without cameras: China’s iFlytek is perfecting a technology that can recognize individuals by their “voiceprint.”

In the decades to come, City Brain or its successor systems may even be able to read unspoken thoughts. Drones can already be controlled by helmets that sense and transmit neural signals, and researchers are now designing brain-computer interfaces that go well beyond autofill, to allow you to type just by thinking. An authoritarian state with enough processing power could force the makers of such software to feed every blip of a citizen’s neural activity into a government database.

China has recently been pushing citizens to download and use a propaganda app. The government could use emotion-tracking software to monitor reactions to a political stimulus within an app. A silent, suppressed response to a meme or a clip from a Xi speech would be a meaningful data point to a precog algorithm.
All of these time-synced feeds of on-the-ground data could be supplemented by footage from drones, whose gigapixel cameras can record whole cityscapes in the kind of crystalline detail that allows for license-plate reading and gait recognition. “Spy bird” drones already swoop and circle above Chinese cities, disguised as doves. City Brain’s feeds could be synthesized with data from systems in other urban areas, to form a multidimensional, real-time account of nearly all human activity within China. Server farms across China will soon be able to hold multiple angles of high-definition footage of every moment of every Chinese person’s life.
It’s important to stress that systems of this scope are still in development. Most of China’s personal data are not yet integrated, even within individual companies. Nor does China’s government have a one-stop data repository, in part because of turf wars between agencies. But there are no hard political barriers to the integration of all these data, especially for the security state’s use. On the contrary, private firms are required, by formal statute, to assist China’s intelligence services.
The government might soon have a rich, auto-populating data profile for all of its 1 billion–plus citizens. Each profile would comprise millions of data points, including the person’s every appearance in surveilled space, as well as all of her communications and purchases. Her threat risk to the party’s power could constantly be updated in real time, with a more granular score than those used in China’s pilot “social credit” schemes, which already aim to give every citizen a public social-reputation score based on things like social-media connections and buying habits. Algorithms could monitor her digital data score, along with everyone else’s, continuously, without ever feeling the fatigue that hit Stasi officers working the late shift. False positives—deeming someone a threat for innocuous behavior—would be encouraged, in order to boost the system’s built-in chilling effects, so that she’d turn her sharp eyes on her own behavior, to avoid the slightest appearance of dissent.
If her risk factor fluctuated upward—whether due to some suspicious pattern in her movements, her social associations, her insufficient attention to a propaganda-consumption app, or some correlation known only to the AI—a purely automated system could limit her movement. It could prevent her from purchasing plane or train tickets. It could disallow passage through checkpoints. It could remotely commandeer “smart locks” in public or private spaces, to confine her until security forces arrived.
In recent years, a few members of the Chinese intelligentsia have sounded the warning about misused AI, most notably the computer scientist Yi Zeng and the philosopher Zhao Tingyang. In the spring of 2019, Yi published “The Beijing AI Principles,” a manifesto on AI’s potential to interfere with autonomy, dignity, privacy, and a host of other human values.
It was Yi whom I’d come to visit at Beijing’s Institute of Automation, where, in addition to his work on AI ethics, he serves as the deputy director of the Research Center for Brain-Inspired Intelligence. He retrieved me from the lobby. Yi looked young for his age, 37, with kind eyes and a solid frame slimmed down by black sweatpants and a hoodie.
On the way to Yi’s office, we passed one of his labs, where a research assistant hovered over a microscope, watching electrochemical signals flash neuron-to-neuron through mouse-brain tissue. We sat down at a long table in a conference room adjoining his office, taking in the gray, fogged-in cityscape while his assistant fetched tea.
I asked Yi how “The Beijing AI Principles” had been received. “People say, ‘This is just an official show from the Beijing government,’ ” he told me. “But this is my life’s work.” Yi talked freely about AI’s potential misuses. He mentioned a project deployed to a select group of Chinese schools, where facial recognition was used to track not just student attendance but also whether individual students were paying attention.
“I hate that software,” Yi said. “I have to use that word: hate.”

He went on like this for a while, enumerating various unethical applications of AI. “I teach a course on the philosophy of AI,” he said. “I tell my students that I hope none of them will be involved in killer robots. They have only a short time on Earth. There are many other things they could be doing with their future.” Yi clearly knew the academic literature on tech ethics cold. But when I asked him about the political efficacy of his work, his answers were less compelling.
“Many of us technicians have been invited to speak to the government, and even to Xi Jinping, about AI’s potential risks,” he said. “But the government is still in a learning phase, just like other governments worldwide.”

“Do you have anything stronger than that consultative process?” I asked. “Suppose there are times when the government has interests that are in conflict with your principles. What mechanism are you counting on to win out?”

“I, personally, am still in a learning phase on that problem,” Yi said.
Chinese AI start-ups aren’t nearly as bothered. Several are helping Xi develop AI for the express purpose of surveillance. The combination of China’s single-party rule and the ideological residue of central planning makes party elites powerful in every domain, especially the economy. But in the past, the connection between the government and the tech industry was discreet. Recently, the Chinese government started assigning representatives to tech firms, to augment the Communist Party cells that exist within large private companies.
Selling to the state security services is one of the fastest ways for China’s AI start-ups to turn a profit. A national telecom firm is the largest shareholder of iFlytek, China’s voice-recognition giant. Synergies abound: When police use iFlytek’s software to monitor calls, state-owned newspapers provide favorable coverage. Earlier this year, the personalized-news app Toutiao went so far as to rewrite its mission to articulate a new animating goal: aligning public opinion with the government’s wishes. Xu Li, the CEO of SenseTime, recently described the government as his company’s “largest data source.”

Whether any private data are truly protected in China isn’t clear, given the country’s political structure. The digital revolution has made data monopolies difficult to avoid. Even in America, which has a sophisticated tradition of antitrust enforcement, the citizenry has not yet summoned the will to force information about the many out of the hands of the powerful few. But private data monopolies are at least subject to the sovereign power of the countries where they operate. A nation-state’s data monopoly can be prevented only by its people, and only if they possess sufficient political power.
China’s people can’t use an election to rid themselves of Xi. And with no independent judiciary, the government can make an argument, however strained, that it ought to possess any information stream, so long as threats to “stability” could be detected among the data points. Or it can demand data from companies behind closed doors, as happened during the initial coronavirus outbreak. No independent press exists to leak news of these demands to.
Each time a person’s face is recognized, or her voice recorded, or her text messages intercepted, this information could be attached, instantly, to her government-ID number, police records, tax returns, property filings, and employment history. It could be cross-referenced with her medical records and DNA, of which the Chinese police boast they have the world’s largest collection.
Yi and I talked through a global scenario that has begun to worry AI ethicists and China-watchers alike. In this scenario, most AI researchers around the world come to recognize the technology’s risks to humanity, and develop strong norms around its use. All except for one country, which makes the right noises about AI ethics, but only as a cover. Meanwhile, this country builds turnkey national surveillance systems, and sells them to places where democracy is fragile or nonexistent. The world’s autocrats are usually felled by coups or mass protests, both of which require a baseline of political organization. But large-scale political organization could prove impossible in societies watched by pervasive automated surveillance.
Yi expressed worry about this scenario, but he did not name China specifically. He didn’t have to: The country is now the world’s leading seller of AI-powered surveillance equipment. In Malaysia, the government is working with Yitu, a Chinese AI start-up, to bring facial-recognition technology to Kuala Lumpur’s police as a complement to Alibaba’s City Brain platform. Chinese companies also bid to outfit every one of Singapore’s 110,000 lampposts with facial-recognition cameras.
In South Asia, the Chinese government has supplied surveillance equipment to Sri Lanka. On the old Silk Road, the Chinese company Dahua is lining the streets of Mongolia’s capital with AI-assisted surveillance cameras. Farther west, in Serbia, Huawei is helping set up a “safe-city system,” complete with facial-recognition cameras and joint patrols conducted by Serbian and Chinese police aimed at helping Chinese tourists to feel safe.
In the early aughts, the Chinese telecom titan ZTE sold Ethiopia a wireless network with built-in backdoor access for the government. In a later crackdown, dissidents were rounded up for brutal interrogations, during which they were played audio from recent phone calls they’d made. Today, Kenya, Uganda, and Mauritius are outfitting major cities with Chinese-made surveillance networks.
In Egypt, Chinese developers are looking to finance the construction of a new capital. It’s slated to run on a “smart city” platform similar to City Brain, although a vendor has not yet been named. In southern Africa, Zambia has agreed to buy more than $1 billion in telecom equipment from China, including internet-monitoring technology. China’s Hikvision, the world’s largest manufacturer of AI-enabled surveillance cameras, has an office in Johannesburg.
China uses “predatory lending to sell telecommunications equipment at a significant discount to developing countries, which then puts China in a position to control those networks and their data,” Michael Kratsios, America’s CTO, told me. When countries need to refinance the terms of their loans, China can make network access part of the deal, in the same way that its military secures base rights at foreign ports it finances. “If you give [China] unfettered access to data networks around the world, that could be a serious problem,” Kratsios said.
In 2018, CloudWalk Technology, a Guangzhou-based start-up spun out of the Chinese Academy of Sciences, inked a deal with the Zimbabwean government to set up a surveillance network. Its terms require Harare to send images of its inhabitants—a rich data set, given that Zimbabwe has absorbed migration flows from all across sub-Saharan Africa—back to CloudWalk’s Chinese offices, allowing the company to fine-tune its software’s ability to recognize dark-skinned faces, which have previously proved tricky for its algorithms.
Having set up beachheads in Asia, Europe, and Africa, China’s AI companies are now pushing into Latin America, a region the Chinese government describes as a “core economic interest.” China financed Ecuador’s $240 million purchase of a surveillance-camera system. Bolivia, too, has bought surveillance equipment with help from a loan from Beijing. Venezuela recently debuted a new national ID-card system that logs citizens’ political affiliations in a database built by ZTE. In a grim irony, for years Chinese companies hawked many of these surveillance products at a security expo in Xinjiang, the home province of the Uighurs.
If China is able to surpass America in AI, it will become a more potent geopolitical force, especially as the standard-bearer of a new authoritarian alliance.
China already has some of the world’s largest data sets to feed its AI systems, a crucial advantage for its researchers. In cavernous mega-offices in cities across the country, low-wage workers sit at long tables for long hours, transcribing audio files and outlining objects in images, to make the data generated by China’s massive population more useful. But for the country to best America’s AI ecosystem, its vast troves of data will have to be sifted through by algorithms that recognize patterns well beyond those grasped by human insight. And even executives at China’s search giant Baidu concede that the top echelon of AI talent resides in the West.
Historically, China struggled to retain elite quants, most of whom left to study in America’s peerless computer-science departments, before working at Silicon Valley’s more interesting, better-resourced companies. But that may be changing. The Trump administration has made it difficult for Chinese students to study in the United States, and those who are able to are viewed with suspicion. A leading machine-learning scientist at Google recently described visa restrictions as “one of the largest bottlenecks to our collective research productivity.” Meanwhile, Chinese computer-science departments have gone all-in on AI. Three of the world’s top 10 AI universities, in terms of the volume of research they publish, are now located in China. And that’s before the country finishes building the 50 new AI research centers mandated by Xi’s “AI Innovation Action Plan for Institutions of Higher Education.” Chinese companies attracted 36 percent of global AI private-equity investment in 2017, up from just 3 percent in 2015. Talented Chinese engineers can stay home for school and work for a globally sexy homegrown company like TikTok after graduation.
China will still lag behind America in computing hardware in the near term. Just as data must be processed by algorithms to be useful, algorithms must be instantiated in physical strata—specifically, in the innards of microchips. These gossamer silicon structures are so intricate that a few missing atoms can reroute electrical pulses through the chips’ neuronlike switches. The most sophisticated chips are arguably the most complex objects yet built by humans. They’re certainly too complex to be quickly pried apart and reverse-engineered by China’s vaunted corporate-espionage artists.
Chinese firms can’t yet build the best of the best chip-fabrication rooms, which cost billions of dollars and rest on decades of compounding institutional knowledge. Nitrogen-cooled and seismically isolated, to prevent a passing truck’s rumble from ruining a microchip in vitro, these automated rooms are as much a marvel as their finished silicon wafers. And the best ones are still mostly in the United States, Western Europe, Japan, South Korea, and Taiwan.
America’s government is still able to limit the hardware that flows into China, a state of affairs that the Communist Party has come to resent. When the Trump administration banned the sale of microchips to ZTE in April 2018, Frank Long, an analyst who specializes in China’s AI sector, described it as a wake-up call for China on par with America’s experience of the Arab oil embargo.
But the AI revolution has dealt China a rare leapfrogging opportunity. Until recently, most chips were designed with flexible architecture that allows for many types of computing operations. But AI runs fastest on custom chips, like those Google uses for its cloud computing to instantly spot your daughter’s face in thousands of photos. (Apple performs many of these operations on the iPhone with a custom neural-engine chip.) Because everyone is making these custom chips for the first time, China isn’t as far behind: Baidu and Alibaba are building chips customized for deep learning. And in August 2019, Huawei unveiled a mobile machine-learning chip. Its design came from Cambricon, perhaps the global chip-making industry’s most valuable start-up, which was founded by Yi’s colleagues at the Chinese Academy of Sciences.
By 2030, AI supremacy might be within range for China. The country will likely have the world’s largest economy, and new money to spend on AI applications for its military. It may have the most sophisticated drone swarms. It may have autonomous weapons systems that can forecast an adversary’s actions after a brief exposure to a theater of war, and make battlefield decisions much faster than human cognition allows. Its missile-detection algorithms could void America’s first-strike nuclear advantage. AI could upturn the global balance of power.
On my way out of the Institute of Automation, Yi took me on a tour of his robotics lab. In the high-ceilinged room, grad students fiddled with a giant disembodied metallic arm and a small humanoid robot wrapped in a gray exoskeleton while Yi told me about his work modeling the brain. He said that understanding the brain’s structure was the surest way to understand the nature of intelligence.
I asked Yi how the future of AI would unfold. He said he could imagine software modeled on the brain acquiring a series of abilities, one by one. He said it could achieve some semblance of self-recognition, and then slowly become aware of the past and the future. It could develop motivations and values. The final stage of its assisted evolution would come when it understood other agents as worthy of empathy.
I asked him how long this process would take.
“I think such a machine could be built by 2030,” Yi said.
Before bidding Yi farewell, I asked him to imagine things unfolding another way. “Suppose you finish your digital, high-resolution model of the brain,” I said. “And suppose it attains some rudimentary form of consciousness. And suppose, over time, you’re able to improve it, until it outperforms humans in every cognitive task, with the exception of empathy. You keep it locked down in safe mode until you achieve that last step. But then one day, the government’s security services break down your office door. They know you have this AI on your computer. They want to use it as the software for a new hardware platform, an artificial humanoid soldier. They’ve already manufactured a billion of them, and they don’t give a damn if they’re wired with empathy. They demand your password. Do you give it to them?”

“I would destroy my computer and leave,” Yi said.
“Really?” I replied.
“Yes, really,” he said. “At that point, it would be time to quit my job and go focus on robots that create art.”

If you were looking for a philosopher-king to chart an ethical developmental trajectory for AI, you could do worse than Yi. But the development path of AI will be shaped by overlapping systems of local, national, and global politics, not by a wise and benevolent philosopher-king. That’s why China’s ascent to AI supremacy is such a menacing prospect: The country’s political structure encourages, rather than restrains, this technology’s worst uses.
Even in the U.S., a democracy with constitutionally enshrined human rights, Americans are struggling mightily to prevent the emergence of a public-private surveillance state. But at least America has political structures that stand some chance of resistance. In China, AI will be restrained only according to the party’s needs.
It was nearly noon when I finally left the institute. The day’s rain was in its last hour. Yi ordered me a car and walked me to meet it, holding an umbrella over my head. I made my way to the Forbidden City, Beijing’s historic seat of imperial power. Even this short trip to the city center brought me into contact with China’s surveillance state. Before entering Tiananmen Square, both my passport and my face were scanned, an experience I was becoming numb to.
In the square itself, police holding body-size bulletproof shields jogged in single-file lines, weaving paths through throngs of tourists. The heavy police presence was a chilling reminder of the student protesters who were murdered here in 1989. China’s AI-patrolled Great Firewall was built, in part, to make sure that massacre is never discussed on its internet. To dodge algorithmic censors, Chinese activists rely on memes—Tank Man approaching a rubber ducky—to commemorate the students’ murder.
The party’s AI-powered censorship extends well beyond Tiananmen. Earlier this year, the government arrested Chinese programmers who were trying to preserve disappeared news stories about the coronavirus pandemic. Some of the articles in their database were banned because they were critical of Xi and the party. They survived only because internet users reposted them on social media, interlaced with coded language and emojis designed to evade algorithms. Work-arounds of this sort are short-lived: Xi’s domestic critics used to make fun of him with images of Winnie the Pooh, but those too are now banned in China. The party’s ability to edit history and culture, by force, will become more sweeping and precise, as China’s AI improves.
Wresting power from a government that so thoroughly controls the information environment will be difficult. It may take a million acts of civil disobedience, like the laptop-destroying scenario imagined by Yi. China’s citizens will have to stand with their students. Who can say what hardships they may endure?

China’s citizens don’t yet seem to be radicalized against surveillance. The pandemic may even make people value privacy less, as one early poll in the U.S. suggests. So far, Xi is billing the government’s response as a triumphant “people’s war,” another old phrase from Mao, referring to the mobilization of the whole population to smash an invading force. The Chinese people may well be more pliant now than they were before the virus.
But evidence suggests that China’s young people—at least some of them—resented the government’s initial secrecy about the outbreak. For all we know, some new youth movement on the mainland is biding its time, waiting for the right moment to make a play for democracy. The people of Hong Kong certainly sense the danger of this techno-political moment. The night before I arrived in China, more than 1 million protesters had poured into the island’s streets. (The free state newspaper in my Beijing hotel described them, falsely, as police supporters.) A great many held umbrellas over their heads, in solidarity with student protesters from years prior, and to keep their faces hidden. A few tore down a lamppost on the suspicion that it contained a facial-recognition camera. Xi has since tightened his grip on the region with a “national-security law,” and there is little that outnumbered Hong Kongers can do about it, at least not without help from a movement on the mainland.
During my visit to Tiananmen Square, I didn’t see any protesters. People mostly milled about peacefully, posing for selfies with the oversize portrait of Mao. They held umbrellas, but only to keep the August sun off their faces. Walking in their midst, I kept thinking about the contingency of history: The political systems that constrain a technology during its early development profoundly shape our shared global future. We have learned this from our adventures in carbon-burning. Much of the planet’s political trajectory may depend on just how dangerous China’s people imagine AI to be in the hands of centralized power. Until they secure their personal liberty, at some unimaginable cost, free people everywhere will have to hope against hope that the world’s most intelligent machines are made elsewhere.
This article appears in the September 2020 print edition with the headline “When China Sees All.”
" |
281 | 2,019 | "Will Robots Change Human Relationships? - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Artificial Intelligence More From Artificial Intelligence The Sudden Fall of Sam Altman Ross Andersen The AI Debate Is Happening in a Cocoon Amba Kak Sarah Myers West My North Star for the Future of AI Fei-Fei Li AI Search Is Turning Into the Problem Everyone Worried About Caroline Mimbs Nyce How AI Will Rewire Us For better and for worse, robots will alter humans’ capacity for altruism, love, and friendship.
Fears about how robots might transform our lives have been a staple of science fiction for decades. In the 1940s, when widespread interaction between humans and artificial intelligence still seemed a distant prospect, Isaac Asimov posited his famous Three Laws of Robotics, which were intended to keep robots from hurting us. The first—“a robot may not injure a human being or, through inaction, allow a human being to come to harm”—followed from the understanding that robots would affect humans via direct interaction, for good and for ill. Think of classic sci-fi depictions: C-3PO and R2-D2 working with the Rebel Alliance to thwart the Empire in Star Wars , say, or HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina plotting to murder their ostensible masters. But these imaginings were not focused on AI’s broader and potentially more significant social effects—the ways AI could affect how we humans interact with one another.
Radical innovations have previously transformed the way humans live together, of course. The advent of cities sometime between 5,000 and 10,000 years ago meant a less nomadic existence and a higher population density. We adapted both individually and collectively (for instance, we may have evolved resistance to infections made more likely by these new circumstances). More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information.
As consequential as these innovations were, however, they did not change the fundamental aspects of human behavior that comprise what I call the “social suite”: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching. The basic contours of these traits remain remarkably consistent throughout the world, regardless of whether a population is urban or rural, and whether or not it uses modern technology.
But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another.
Consider some experiments from my lab at Yale, where my colleagues and I have been exploring how such effects might play out. In one, we directed small groups of people to work with humanoid robots to lay railroad tracks in a virtual world. Each group consisted of three people and a little blue-and-white robot sitting around a square table, working on tablets. The robot was programmed to make occasional errors—and to acknowledge them: “Sorry, guys, I made the mistake this round,” it declared perkily. “I know it may be hard to believe, but robots make mistakes too.” As it turned out, this clumsy, confessional robot helped the groups perform better—by improving communication among the humans. They became more relaxed and conversational, consoling group members who stumbled and laughing together more often. Compared with the control groups, whose robot made only bland statements, the groups with a confessional robot were better able to collaborate.
In another, virtual experiment, we divided 4,000 human subjects into groups of about 20, and assigned each individual “friends” within the group; these friendships formed a social network. The groups were then assigned a task: Each person had to choose one of three colors, but no individual’s color could match that of his or her assigned friends within the social network. Unknown to the subjects, some groups contained a few bots that were programmed to occasionally make mistakes. Humans who were directly connected to these bots grew more flexible, and tended to avoid getting stuck in a solution that might work for a given individual but not for the group as a whole. What’s more, the resulting flexibility spread throughout the network, reaching even people who were not directly connected to the bots. As a consequence, groups with mistake-prone bots consistently outperformed groups containing bots that did not make mistakes. The bots helped the humans to help themselves.
Both of these studies demonstrate that in what I call “hybrid systems”—where people and robots interact socially—the right kind of AI can improve the way humans relate to one another. Other findings reinforce this. For instance, the political scientist Kevin Munger directed specific kinds of bots to intervene after people sent racist invective to other people online. He showed that, under certain circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person’s use of racist speech to decline for more than a month.
But adding AI to our social environment can also make us behave less productively and less ethically. In yet another experiment, this one designed to explore how AI might affect the “tragedy of the commons”—the notion that individuals’ self-centered actions may collectively damage their common interests—we gave several thousand subjects money to use over multiple rounds of an online game. In each round, subjects were told that they could either keep their money or donate some or all of it to their neighbors. If they made a donation, we would match it, doubling the money their neighbors received. Early in the game, two-thirds of players acted altruistically. After all, they realized that being generous to their neighbors in one round might prompt their neighbors to be generous to them in the next one, establishing a norm of reciprocity. From a selfish and short-term point of view, however, the best outcome would be to keep your own money and receive money from your neighbors. In this experiment, we found that by adding just a few bots (posing as human players) that behaved in a selfish, free-riding way, we could drive the group to behave similarly. Eventually, the human players ceased cooperating altogether. The bots thus converted a group of generous people into selfish jerks.
Let’s pause to contemplate the implications of this finding. Cooperation is a key feature of our species, essential for social life. And trust and generosity are crucial in differentiating successful groups from unsuccessful ones. If everyone pitches in and sacrifices in order to help the group, everyone should benefit. When this behavior breaks down, however, the very notion of a public good disappears, and everyone suffers. The fact that AI might meaningfully reduce our ability to work together is extremely concerning.
Already, we are encountering real-world examples of how AI can corrupt human relations outside the laboratory. A study examining 5.7 million Twitter users in the run-up to the 2016 U.S. presidential election found that trolling and malicious Russian accounts—including ones operated by bots—were regularly retweeted in a similar manner to other, unmalicious accounts, influencing conservative users particularly strongly. By taking advantage of humans’ cooperative nature and our interest in teaching one another—both features of the social suite—the bots affected even humans with whom they did not interact directly, helping to polarize the country’s electorate.
Other social effects of simple types of AI play out around us daily. Parents, watching their children bark rude commands at digital assistants such as Alexa or Siri, have begun to worry that this rudeness will leach into the way kids treat people, or that kids’ relationships with artificially intelligent machines will interfere with, or even preempt, human relationships. Children who grow up relating to AI in lieu of people might not acquire “the equipment for empathic connection,” Sherry Turkle, the MIT expert on technology and society, told The Atlantic ’s Alexis C. Madrigal not long ago , after he’d bought a toy robot for his son.
As digital assistants become ubiquitous, we are becoming accustomed to talking to them as though they were sentient; writing in these pages last year, Judith Shulevitz described how some of us are starting to treat them as confidants, or even as friends and therapists. Shulevitz herself says she confesses things to Google Assistant that she wouldn’t tell her husband. If we grow more comfortable talking intimately to our devices, what happens to our human marriages and friendships? Thanks to commercial imperatives, designers and programmers typically create devices whose responses make us feel better—but may not help us be self-reflective or contemplate painful truths. As AI permeates our lives, we must confront the possibility that it will stunt our emotions and inhibit deep human connections, leaving our relationships with one another less reciprocal, or shallower, or more narcissistic.
All of this could end up transforming human society in unintended ways that we need to reckon with as a polity. Do we want machines to affect whether and how children are kind? Do we want machines to affect how adults have sex? Kathleen Richardson, an anthropologist at De Montfort University in the U.K., worries a lot about the latter question. As the director of the Campaign Against Sex Robots—and, yes, sex robots are enough of an incipient phenomenon that a campaign against them isn’t entirely premature—she warns that they will be dehumanizing and could lead users to retreat from real intimacy. We might even progress from treating robots as instruments for sexual gratification to treating other people that way.

Other observers have suggested that robots could radically improve sex between humans. In his 2007 book, Love and Sex With Robots, the iconoclastic chess master turned businessman David Levy considers the positive implications of “romantically attractive and sexually desirable robots.” He suggests that some people will come to prefer robot mates to human ones (a prediction borne out by the Japanese man who “married” an artificially intelligent hologram last year). Sex robots won’t be susceptible to sexually transmitted diseases or unwanted pregnancies. And they could provide opportunities for shame-free experimentation and practice—thus helping humans become “virtuoso lovers.” For these and other reasons, Levy believes that sex with robots will come to be seen as ethical, and perhaps in some cases expected.
Long before most of us encounter AI dilemmas this intimate, we will wrestle with more quotidian challenges. The age of driverless cars, after all, is upon us. These vehicles promise to substantially reduce the fatigue and distraction that bedevil human drivers, thereby preventing accidents. But what other effects might they have on people? Driving is a very modern kind of social interaction, requiring high levels of cooperation and social coordination. I worry that driverless cars, by depriving us of an occasion to exercise these abilities, could contribute to their atrophy.
Not only will these vehicles be programmed to take over driving duties and hence to usurp from humans the power to make moral judgments (for example, about which pedestrian to hit when a collision is inevitable), they will also affect humans with whom they’ve had no direct contact. For instance, drivers who have steered awhile alongside an autonomous vehicle traveling at a steady, invariant speed might be lulled into driving less attentively, thereby increasing their likelihood of accidents once they’ve moved to a part of the highway occupied only by human drivers. Alternatively, experience may reveal that driving alongside autonomous vehicles traveling in perfect accordance with traffic laws actually improves human performance.
Either way, we would be reckless to unleash new forms of AI without first taking such social spillovers—or externalities, as they’re often called—into account. We must apply the same effort and ingenuity that we apply to the hardware and software that make self-driving cars possible to managing AI’s potential ripple effects on those outside the car. After all, we mandate brake lights on the back of your car not just, or even primarily, for your benefit, but for the sake of the people behind you.
In 1985, some four decades after Isaac Asimov introduced his laws of robotics, he added another to his list: A robot should never do anything that could harm humanity. But he struggled with how to assess such harm. “A human being is a concrete object,” he later wrote. “Injury to a person can be estimated and judged. Humanity is an abstraction.”

Focusing specifically on social spillovers can help. Spillovers in other arenas lead to rules, laws, and demands for democratic oversight. Whether we’re talking about a corporation polluting the water supply or an individual spreading secondhand smoke in an office building, as soon as some people’s actions start affecting other people, society may intervene. Because the effects of AI on human-to-human interaction stand to be intense and far-reaching, and the advances rapid and broad, we must investigate systematically what second-order effects might emerge, and discuss how to regulate them on behalf of the common good.
Already, a diverse group of researchers and practitioners—computer scientists, engineers, zoologists, and social scientists, among others—is coming together to develop the field of “machine behavior,” in hopes of putting our understanding of AI on a sounder theoretical and technical foundation. This field does not see robots merely as human-made objects, but as a new class of social actors.
The inquiry is urgent. In the not-distant future, AI-endowed machines may, by virtue of either programming or independent learning (a capacity we will have given them), come to exhibit forms of intelligence and behavior that seem strange compared with our own. We will need to quickly differentiate the behaviors that are merely bizarre from the ones that truly threaten us. The aspects of AI that should concern us most are the ones that affect the core aspects of human social life—the traits that have enabled our species’ survival over the millennia.
The Enlightenment philosopher Thomas Hobbes argued that humans needed a collective agreement to keep us from being disorganized and cruel. He was wrong. Long before we formed governments, evolution equipped humans with a social suite that allowed us to live together peacefully and effectively. In the pre-AI world, the genetically inherited capacities for love, friendship, cooperation, and teaching have continued to help us to live communally.
Unfortunately, humans do not have the time to evolve comparable innate capacities to live with robots. We must therefore take steps to ensure that they can live nondestructively with us. As AI insinuates itself more fully into our lives, we may yet require a new social contract—one with machines rather than with other humans.
This article appears in the April 2019 print edition with the headline “How AI Will Rewire Us.”
" |
282 | 2,018 | "Yuval Noah Harari on Why Technology Favors Tyranny - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
More From Is Democracy Dying? More From Is Democracy Dying? The Atlantic Democracy Reader Annika Neklason Radio Atlantic : Is Democracy Dying? Kevin Townsend UN Secretary-General: American Power Is in Decline, the World Is ‘in Pieces’ Uri Friedman Ralph Waldo Emerson’s Call to Save America Ralph Waldo Emerson Why Technology Favors Tyranny Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it.
Editor’s Note: This article is part of a series that attempts to answer the question: Is democracy dying?

I. The Growing Fear of Irrelevance

There is nothing inevitable about democracy. For all the success that democracies have had over the past century or more, they are blips in history. Monarchies, oligarchies, and other forms of authoritarian rule have been far more common modes of human governance.
The emergence of liberal democracies is associated with ideals of liberty and equality that may seem self-evident and irreversible. But these ideals are far more fragile than we believe. Their success in the 20th century depended on unique technological conditions that may prove ephemeral.
In the second decade of the 21st century, liberalism has begun to lose credibility. Questions about the ability of liberal democracy to provide for the middle class have grown louder; politics have grown more tribal; and in more and more countries, leaders are showing a penchant for demagoguery and autocracy. The causes of this political shift are complex, but they appear to be intertwined with current technological developments. The technology that favored democracy is changing, and as artificial intelligence develops, it might change further.
Information technology is continuing to leap forward; biotechnology is beginning to provide a window into our inner lives—our emotions, thoughts, and choices. Together, infotech and biotech will create unprecedented upheavals in human society, eroding human agency and, possibly, subverting human desires. Under such conditions, liberal democracy and free-market economics might become obsolete.
Explore the October 2018 Issue Check out more from this issue and find your next story to read.
Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman). He looked at the propaganda posters—which typically depicted coal miners and steelworkers in heroic poses—and saw himself there: “I am in that poster! I am the hero of the future!” In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in ted Talks, at government think tanks, and at high-tech conferences— globalization , blockchain , genetic engineering , AI , machine learning —and common people, both men and women, may well suspect that none of these terms is about them.
In the 20th century, the masses revolted against exploitation and sought to translate their vital role in the economy into political power. Now the masses fear irrelevance, and they are frantic to use their remaining political power before it is too late. Brexit and the rise of Donald Trump may therefore demonstrate a trajectory opposite to that of traditional socialist revolutions. The Russian, Chinese, and Cuban revolutions were made by people who were vital to the economy but lacked political power; in 2016, Trump and Brexit were supported by many people who still enjoyed political power but feared they were losing their economic worth. Perhaps in the 21st century, populist revolts will be staged not against an economic elite that exploits people but against an economic elite that does not need them anymore. This may well be a losing battle. It is much harder to struggle against irrelevance than against exploitation.
The revolutions in information technology and biotechnology are still in their infancy, and the extent to which they are responsible for the current crisis of liberalism is debatable. Most people in Birmingham, Istanbul, St. Petersburg, and Mumbai are only dimly aware, if they are aware at all, of the rise of AI and its potential impact on their lives. It is undoubtable, however, that the technological revolutions now gathering momentum will in the next few decades confront humankind with the hardest trials it has yet encountered.
II. A New Useless Class?

Let’s start with jobs and incomes, because whatever liberal democracy’s philosophical appeal, it has gained strength in no small part thanks to a practical advantage: The decentralized approach to decision making that is characteristic of liberalism—in both politics and economics—has allowed liberal democracies to outcompete other states, and to deliver rising affluence to their people.
Liberalism reconciled the proletariat with the bourgeoisie, the faithful with atheists, natives with immigrants, and Europeans with Asians by promising everybody a larger slice of the pie. With a constantly growing pie, that was possible. And the pie may well keep growing. However, economic growth may not solve social problems that are now being created by technological disruption, because such growth is increasingly predicated on the invention of more and more disruptive technologies.
Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills. And we don’t know of any third kind of skill—beyond the manual and the cognitive—in which humans will always have an edge.
At least for a few more decades, human intelligence is likely to far exceed computer intelligence in numerous fields. Hence as computers take over more routine cognitive jobs, new creative jobs for humans will continue to appear. Many of these new jobs will probably depend on cooperation rather than competition between humans and AI. Human-AI teams will likely prove superior not just to humans, but also to computers working on their own.
However, most of the new jobs will presumably demand high levels of expertise and ingenuity, and therefore may not provide an answer to the problem of unemployed unskilled laborers, or workers employable only at extremely low wages. Moreover, as AI continues to improve, even jobs that demand high intelligence and creativity might gradually disappear. The world of chess serves as an example of where things might be heading. For several years after IBM’s computer Deep Blue defeated Garry Kasparov in 1997, human chess players still flourished; AI was used to train human prodigies, and teams composed of humans plus computers proved superior to computers playing alone.
Yet in recent years, computers have become so good at playing chess that their human collaborators have lost their value and might soon become entirely irrelevant. On December 6, 2017, another crucial milestone was reached when Google’s AlphaZero program defeated the Stockfish 8 program. Stockfish 8 had won a world computer chess championship in 2016. It had access to centuries of accumulated human experience in chess, as well as decades of computer experience. By contrast, AlphaZero had not been taught any chess strategies by its human creators—not even standard openings. Rather, it used the latest machine-learning principles to teach itself chess by playing against itself. Nevertheless, out of 100 games that the novice AlphaZero played against Stockfish 8, AlphaZero won 28 and tied 72—it didn’t lose once. Since AlphaZero had learned nothing from any human, many of its winning moves and strategies seemed unconventional to the human eye. They could be described as creative, if not downright genius.
Can you guess how long AlphaZero spent learning chess from scratch, preparing for the match against Stockfish 8, and developing its genius instincts? Four hours.
For centuries, chess was considered one of the crowning glories of human intelligence. AlphaZero went from utter ignorance to creative mastery in four hours, without the help of any human guide.
AlphaZero is not the only imaginative software out there. One of the ways to catch cheaters in chess tournaments today is to monitor the level of originality that players exhibit. If they play an exceptionally creative move, the judges will often suspect that it could not possibly be a human move—it must be a computer move. At least in chess, creativity is already considered to be the trademark of computers rather than humans! So if chess is our canary in the coal mine, we have been duly warned that the canary is dying. What is happening today to human-AI teams in chess might happen down the road to human-AI teams in policing, medicine, banking, and many other fields.
What’s more, AI enjoys uniquely nonhuman abilities, which makes the difference between AI and a human worker one of kind rather than merely of degree. Two particularly important nonhuman abilities that AI possesses are connectivity and updatability.
For example, many drivers are unfamiliar with all the changing traffic regulations on the roads they drive, and they often violate them. In addition, since every driver is a singular entity, when two vehicles approach the same intersection, the drivers sometimes miscommunicate their intentions and collide. Self-driving cars, by contrast, will know all the traffic regulations and never disobey them on purpose, and they could all be connected to one another. When two such vehicles approach the same junction, they won’t really be two separate entities, but part of a single algorithm. The chances that they might miscommunicate and collide will therefore be far smaller.
Similarly, if the World Health Organization identifies a new disease, or if a laboratory produces a new medicine, it can’t immediately update all the human doctors in the world. Yet even if you had billions of AI doctors in the world—each monitoring the health of a single human being—you could still update all of them within a split second, and they could all communicate to one another their assessments of the new disease or medicine. These potential advantages of connectivity and updatability are so huge that at least in some lines of work, it might make sense to replace all humans with computers, even if individually some humans still do a better job than the machines.
All of this leads to one very important conclusion: The automation revolution will not consist of a single watershed event, after which the job market will settle into some new equilibrium. Rather, it will be a cascade of ever bigger disruptions. Old jobs will disappear and new jobs will emerge, but the new jobs will also rapidly change and vanish. People will need to retrain and reinvent themselves not just once, but many times.
Just as in the 20th century governments established massive education systems for young people, in the 21st century they will need to establish massive reeducation systems for adults. But will that be enough? Change is always stressful, and the hectic world of the early 21st century has produced a global epidemic of stress. As job volatility increases, will people be able to cope? By 2050, a useless class might emerge, the result not only of a shortage of jobs or a lack of relevant education but also of insufficient mental stamina to continue learning new skills.
III. The Rise of Digital Dictatorships
As many people lose their economic value, they might also come to lose their political power. The same technologies that might make billions of people economically irrelevant might also make them easier to monitor and control.
AI frightens many people because they don’t trust it to remain obedient. Science fiction makes much of the possibility that computers or robots will develop consciousness—and shortly thereafter will try to kill all humans. But there is no particular reason to believe that AI will develop consciousness as it becomes more intelligent. We should instead fear AI because it will probably always obey its human masters, and never rebel. AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.
Consider surveillance. Numerous countries around the world, including several democracies, are busy building unprecedented systems of surveillance. For example, Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime.
Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats. The Palestinians may administer some towns and villages in the West Bank, but the Israelis command the sky, the airwaves, and cyberspace. It therefore takes surprisingly few Israeli soldiers to effectively control the roughly 2.5 million Palestinians who live in the West Bank.
In one incident in October 2017, a Palestinian laborer posted to his private Facebook account a picture of himself in his workplace, alongside a bulldozer. Adjacent to the image he wrote, “Good morning!” A Facebook translation algorithm made a small error when transliterating the Arabic letters. Instead of Ysabechhum (which means “Good morning”), the algorithm identified the letters as Ydbachhum (which means “Hurt them”). Suspecting that the man might be a terrorist intending to use a bulldozer to run people over, Israeli security forces swiftly arrested him.
They released him after they realized that the algorithm had made a mistake. Even so, the offending Facebook post was taken down—you can never be too careful. What Palestinians are experiencing today in the West Bank may be just a primitive preview of what billions of people will eventually experience all over the planet.
Imagine, for instance, that the current regime in North Korea gained a more advanced version of this sort of technology in the future. North Koreans might be required to wear a biometric bracelet that monitors everything they do and say, as well as their blood pressure and brain activity. Using the growing understanding of the human brain and drawing on the immense powers of machine learning, the North Korean government might eventually be able to gauge what each and every citizen is thinking at each and every moment. If a North Korean looked at a picture of Kim Jong Un and the biometric sensors picked up telltale signs of anger (higher blood pressure, increased activity in the amygdala), that person could be in the gulag the next day.
And yet such hard-edged tactics may not prove necessary, at least much of the time. A facade of free choice and free voting may remain in place in some countries, even as the public exerts less and less actual control. To be sure, attempts to manipulate voters’ feelings are not new. But once somebody (whether in San Francisco or Beijing or Moscow) gains the technological ability to manipulate the human heart—reliably, cheaply, and at scale—democratic politics will mutate into an emotional puppet show.
We are unlikely to face a rebellion of sentient machines in the coming decades, but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something—be it a car, a politician, or an entire ideology. The bots might identify our deepest fears, hatreds, and cravings and use them against us. We have already been given a foretaste of this in recent elections and referendums across the world, when hackers learned how to manipulate individual voters by analyzing data about them and exploiting their prejudices. While science-fiction thrillers are drawn to dramatic apocalypses of fire and smoke, in reality we may be facing a banal apocalypse by clicking.
The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. Historically, autocracies have faced crippling handicaps in regard to innovation and economic growth. In the late 20th century, democracies usually outperformed dictatorships, because they were far better at processing information. We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. This is one reason the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy.
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century—the desire to concentrate all information and power in one place—may become their decisive advantage in the 21st century.
New technologies will continue to emerge, of course, and some of them may encourage the distribution rather than the concentration of information and power. Blockchain technology, and the use of cryptocurrencies enabled by it, is currently touted as a possible counterweight to centralized power. But blockchain technology is still in the embryonic stage, and we don’t yet know whether it will indeed counterbalance the centralizing tendencies of AI. Remember that the Internet, too, was hyped in its early days as a libertarian panacea that would free people from all centralized systems—but is now poised to make centralized authority more powerful than ever.
IV. The Transfer of Authority to Machines
Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines. We might willingly give up more and more authority over our lives because we will learn from experience to trust the algorithms more than our own feelings, eventually losing our ability to make many decisions for ourselves. Just think of the way that, within a mere two decades, billions of people have come to entrust Google’s search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space. People ask Google not just to find information but also to guide them around. Self-driving cars and AI physicians would represent further erosion: While these innovations would put truckers and human doctors out of work, their larger import lies in the continuing transfer of authority and responsibility to machines.
Humans are used to thinking about life as a drama of decision making. Liberal democracy and free-market capitalism see the individual as an autonomous agent constantly making choices about the world. Works of art—be they Shakespeare plays, Jane Austen novels, or cheesy Hollywood comedies—usually revolve around the hero having to make some crucial decision. To be or not to be? To listen to my wife and kill King Duncan, or listen to my conscience and spare him? To marry Mr. Collins or Mr. Darcy? Christian and Muslim theology similarly focus on the drama of decision making, arguing that everlasting salvation depends on making the right choice.
What will happen to this view of life as we rely on AI to make ever more decisions for us? Even now we trust Netflix to recommend movies and Spotify to pick music we’ll like. But why should AI’s helpfulness stop there? Every year millions of college students need to decide what to study. This is a very important and difficult decision, made under pressure from parents, friends, and professors who have varying interests and opinions. It is also influenced by students’ own individual fears and fantasies, which are themselves shaped by movies, novels, and advertising campaigns. Complicating matters, a given student does not really know what it takes to succeed in a given profession, and doesn’t necessarily have a realistic sense of his or her own strengths and weaknesses.
It’s not so hard to see how AI could one day make better decisions than we do about careers, and perhaps even about relationships. But once we begin to count on AI to decide what to study, where to work, and whom to date or even marry, human life will cease to be a drama of decision making, and our conception of life will need to change. Democratic elections and free markets might cease to make sense. So might most religions and works of art. Imagine Anna Karenina taking out her smartphone and asking Siri whether she should stay married to Karenin or elope with the dashing Count Vronsky. Or imagine your favorite Shakespeare play with all the crucial decisions made by a Google algorithm. Hamlet and Macbeth would have much more comfortable lives, but what kind of lives would those be? Do we have models for making sense of such lives?
Can parliaments and political parties overcome these challenges and forestall the darker scenarios? At the current moment this does not seem likely. Technological disruption is not even a leading item on the political agenda. During the 2016 U.S. presidential race, the main reference to disruptive technology concerned Hillary Clinton’s email debacle, and despite all the talk about job loss, neither candidate directly addressed the potential impact of automation. Donald Trump warned voters that Mexicans would take their jobs, and that the U.S. should therefore build a wall on its southern border. He never warned voters that algorithms would take their jobs, nor did he suggest building a firewall around California.
So what should we do? For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated. If we invest too much in AI and too little in developing the human mind, the very sophisticated artificial intelligence of computers might serve only to empower the natural stupidity of humans, and to nurture our worst (but also, perhaps, most powerful) impulses, among them greed and hatred. To avoid such an outcome, for every dollar and every minute we invest in improving AI, we would be wise to invest a dollar and a minute in exploring and developing human consciousness.
More practically, and more immediately, if we want to prevent the concentration of all wealth and power in the hands of a small elite, we must regulate the ownership of data. In ancient times, land was the most important asset, so politics was a struggle to control land. In the modern era, machines and factories became more important than land, so political struggles focused on controlling these vital means of production. In the 21st century, data will eclipse both land and machinery as the most important asset, so politics will be a struggle to control data’s flow.
Unfortunately, we don’t have much experience in regulating the ownership of data, which is inherently a far more difficult task than regulating land or machines. Data are everywhere and nowhere at the same time, they can move at the speed of light, and you can create as many copies of them as you want. Do the data collected about my DNA, my brain, and my life belong to me, or to the government, or to a corporation, or to the human collective? The race to accumulate data is already on, and is currently headed by giants such as Google and Facebook and, in China, Baidu and Tencent. So far, many of these companies have acted as “attention merchants”—they capture our attention by providing us with free information, services, and entertainment, and then they resell our attention to advertisers. Yet their true business isn’t merely selling ads. Rather, by capturing our attention they manage to accumulate immense amounts of data about us, which are worth more than any advertising revenue. We aren’t their customers—we are their product.
Ordinary people will find it very difficult to resist this process. At present, many of us are happy to give away our most valuable asset—our personal data—in exchange for free email services and funny cat videos. But if, later on, ordinary people decide to try to block the flow of data, they are likely to have trouble doing so, especially as they may have come to rely on the network to help them make decisions, and even for their health and physical survival.
Nationalization of data by governments could offer one solution; it would certainly curb the power of big corporations. But history suggests that we are not necessarily better off in the hands of overmighty governments. So we had better call upon our scientists, our philosophers, our lawyers, and even our poets to turn their attention to this big question: How do you regulate the ownership of data?
Currently, humans risk becoming similar to domesticated animals. We have bred docile cows that produce enormous amounts of milk but are otherwise far inferior to their wild ancestors. They are less agile, less curious, and less resourceful. We are now creating tame humans who produce enormous amounts of data and function as efficient chips in a huge data-processing mechanism, but they hardly maximize their human potential. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.
If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing. These will not be easy tasks. But achieving them may be the best safeguard of democracy.
This article has been adapted from Yuval Noah Harari’s book, 21 Lessons for the 21st Century.
" |
283 | 2,019 | "OpenAI Five defeats professional Dota 2 team, twice | VentureBeat" | "https://venturebeat.com/ai/openai-five-defeats-a-team-of-professional-dota-2-players" | "OpenAI Five defeats professional Dota 2 team, twice
[Image: OpenAI’s Dota 2 battle arena.]
OpenAI , a San Francisco-based nonprofit AI research organization backed by tech luminaries Reid Hoffman and Peter Thiel, has investigated autonomous systems that can achieve superhuman performance in Pong and Montezuma’s Revenge — not to mention natural language systems capable of impressive coherency. But it’s also spent the better part of four years developing AI capable of human-level play in Valve’s Dota 2 battle arena game, and it today set the fruit of its labor loose on a team of professional players.
At a packed event in San Francisco, OpenAI Five (OpenAI’s autonomous system) competed against Europe’s OG — an esports collective that became the first team to win four Dota Major Championships, in 2017 — in a series of rounds commentated on by players William “Blitz” Lee, Austin “Capitalist” Walsh, Owen “ODPixel” Davies, Kevin “Purge” Godec, and Jorien “Sheever” van der Heijden. The stakes were somewhat higher than in OpenAI’s previous matches; in a best-of-three match at Valve’s The International 2018 esports competition (where prizes totaled $25 million), two teams of pro gamers overcame OpenAI Five.
This time around, the bots won the first two matches of three in a Captain’s Draft mode, which let each team ban characters to prevent the other from selecting them. In the second match, OpenAI Five emerged victorious after about 20 minutes — roughly half the first game’s length.
The rules were the same as those last summer, at The International: The bots didn’t have invulnerable couriers (NPCs that deliver items to heroes), which in earlier rounds they used to ferry a stream of healing potions to their player characters. OpenAI also played on the latest Dota 2 patch, and with summoning and illusion features disabled. Still, it benefited from both a “more fluid” training process and 8 times more training compute; according to OpenAI cofounder and chairman Greg Brockman, it now has a collective 45,000 years of Dota 2 gameplay experience under its belt.
Historically, an absence of long-term planning has been OpenAI Five’s Achilles’ heel — it has often emphasized short-term payoffs over long-term rewards. Dota 2 games generally last 30 to 45 minutes, and OpenAI says its AI agents have a “reward half-life” — the horizon over which the value of future payoffs decays by half — of 14 minutes. Another of the bot’s disadvantages? It doesn’t learn between games.
OpenAI preferred to defend its towers in today’s matches, although it occasionally brought over a hero to strike proactively. It made a few misplays, like directing one of its player characters — Death Prophet — to use its ultimate skill against an enemy hero, Riki, after which the latter went invisible and retreated. But it demonstrated a knack for “juggling” — that is, killing creatures away from the main action (despite the fact that it strayed away from resource gathering, attacking towers, and getting objectives). Moreover, it directed heroes to walk away in situations where damage-over-time effects were likely to kill them, had them flicker in and out of invisibility to avoid being killed, and spent in-game currency to restore their health meters.
“OG played extremely weirdly the entire time, and we saw sometimes it worked, and sometimes it really, really didn’t,” Royal Academy of Engineering research fellow Mike Cook wrote on Twitter. “I’m not sure what to make of the new bots … They’re clearly very different … But I also feel like OG’s draft and play was very different to what we’ve seen from human teams facing them before.” At the conclusion of today’s match, OpenAI announced that it’ll release a platform for the public to play against OpenAI Five — a mode called Arena — starting April 18 and ending April 21.
How OpenAI tackled Dota 2
Valve’s Dota 2 — a follow-up to Defense of the Ancients (DotA), a community-created mod for Blizzard’s Warcraft III: Reign of Chaos — is what’s known as a multiplayer online battle arena, or MOBA. Two groups of five players, each of which is given a base to occupy and defend, attempt to destroy a structure — the Ancient — at the opposing team’s base. Player characters (heroes) have a distinct set of abilities, and collect experience points and items which unlock new attacks and defensive moves.
It’s more complex than it sounds. The average match contains 80,000 individual frames, and at any given moment each hero can choose from as many as 170,000 possible actions; over a full game, a hero ends up making tens of thousands of moves, and the game state at any instant is described by more than 20,000 dimensions. And each of those heroes — of which there are over 100 — can pick up or purchase hundreds of in-game items.
OpenAI Five isn’t able to handle the full game yet — it can only play 18 of the game’s 115 heroes, and it can’t use abilities like summons and illusions. And in a somewhat controversial design decision, OpenAI’s engineers opted not to have it read pixels from the screen to retrieve information, as human players must. It uses Dota 2’s bot API instead, obviating the need for it to search the map to check where its team might be, check if a spell is ready, or estimate an enemy’s health or distance.
That said, it’s able to draft a team entirely on its own that takes into account the opposing side’s choices.
OpenAI’s been chipping away at the Dota 2 dilemma for a while now, and demoed an early iteration of its MOBA-playing bot — one which beat one of the world’s top players, Danil “Dendi” Ishutin, in a 1-on-1 match — in August 2017. It kicked things up a notch in June with OpenAI Five, an improved system capable of playing five-on-five matches that managed to beat a team of OpenAI employees, a team of audience members, a Valve employee team, an amateur team, and a semi-pro team.
Above: OpenAI Five’s view from the Dota 2 battlefield.
In early August, it won two out of three matches against a team ranked in the 99.95th percentile. During the first of the two matches, OpenAI Five started and finished strongly, preventing its human opponents from destroying any of its defensive towers. The second match was a tad less one-sided — the humans took out one of OpenAI Five’s towers — but the AI emerged victorious nonetheless. Only in the third match did the human players eke out a victory.
OpenAI Five consists of five single-layer, 4,096-unit long short-term memory (LSTM) networks — a type of recurrent neural network that can “remember” values over an arbitrary length of time — each assigned to a single hero. (That’s up from 1,024-unit LSTMs in previous versions.) The networks are trained using a deep reinforcement learning model that incentivizes their self-improvement with rewards. In OpenAI Five’s case, those rewards track kills, deaths, assists, last hits, net worth, and other stats that mark progress in Dota 2.
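To make that architecture concrete, here is a minimal PyTorch sketch of the per-hero policy shape the article describes. Only the single-layer, 4,096-unit LSTM comes from the article; the observation width, the size of the action head, and every name in the snippet are illustrative assumptions rather than OpenAI’s actual code.

import torch
import torch.nn as nn

class HeroPolicy(nn.Module):
    # One of five identical networks, each controlling a single hero.
    def __init__(self, obs_dim=20000, hidden=4096, n_actions=1000):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)   # embed game-state features
        self.lstm = nn.LSTM(hidden, hidden, num_layers=1, batch_first=True)
        self.action_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) feature vectors from the game API
        x = torch.relu(self.encoder(obs_seq))
        x, state = self.lstm(x, state)              # memory carried across frames
        return self.action_head(x), state           # action scores per time step

team = [HeroPolicy() for _ in range(5)]             # one policy per hero

The recurrent state is the important part: it is what lets each agent carry information forward from frame to frame instead of reacting to every moment in isolation.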
OpenAI’s training framework — Rapid — consists of two parts: a set of rollout workers that each run a copy of Dota 2 and an LSTM network, and optimizer nodes that perform synchronous gradient descent (the core optimization procedure of deep learning) across a fleet of graphics cards. As the rollout workers gain experience, they feed it to the optimizer nodes, and another set of workers compares the trained LSTM networks (agents) with reference agents.
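A toy, single-process version of that division of labor might look like the sketch below. The fake environment, the tiny linear policy, and the simple REINFORCE-style update are stand-ins chosen for brevity (OpenAI Five was actually trained with a scaled-up version of Proximal Policy Optimization); only the rollout-then-synchronous-update rhythm mirrors Rapid’s description.

import random
import torch
import torch.nn as nn

class ToyEnv:
    # Stand-in for one rollout worker's copy of the game.
    def reset(self):
        return torch.randn(8)                        # fake observation
    def step(self, action):
        reward = 1.0 if action.item() == 0 else 0.0  # fake reward signal
        done = random.random() < 0.1                 # fake episode end
        return torch.randn(8), reward, done

policy = nn.Linear(8, 4)                             # tiny stand-in network
optimizer = torch.optim.SGD(policy.parameters(), lr=1e-2)
envs = [ToyEnv() for _ in range(4)]                  # a "fleet" of workers

for iteration in range(10):
    log_probs, rewards = [], []
    # Rollout phase: every worker plays using the current weights.
    for env in envs:
        obs, done = env.reset(), False
        while not done:
            dist = torch.distributions.Categorical(logits=policy(obs))
            action = dist.sample()
            obs, reward, done = env.step(action)
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)
    # Optimizer phase: one synchronous gradient step on the pooled
    # experience; the updated weights then go back out to every worker.
    loss = -(torch.stack(log_probs) * torch.tensor(rewards)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()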
To self-improve, OpenAI Five plays 180 years’ worth of games every day — 80% against itself and 20% against past selves — on 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores on Google’s Cloud Platform. Months ago, when OpenAI kicked off training, the AI-controlled Dota 2 heroes “walked aimlessly around the map.” But it wasn’t long before the AI mastered basics like lane defense and farming, and soon after nailed advanced strategies like rotating heroes around the map and stealing items from opponents.
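The 80/20 mixture itself takes only a few lines to express. In this sketch the snapshot bookkeeping is invented for illustration; the idea is that playing mostly against the newest weights drives improvement, while the occasional game against a past self guards against forgetting how to beat older strategies.

import copy
import random

past_selves = []                        # periodic snapshots of earlier policies

def pick_opponent(current_policy):
    # 20% of games go to a randomly chosen past self, 80% to the
    # current self, matching the mixture described above.
    if past_selves and random.random() < 0.2:
        return random.choice(past_selves)
    return current_policy

def snapshot(current_policy):
    past_selves.append(copy.deepcopy(current_policy))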
“People used to think that this kind of thing was impossible using today’s deep learning,” Brockman told VentureBeat in an interview last year. “But it turns out that these networks [are] able to play at the professional level in terms of some of the strategies they discover … and really do some long-term planning. The shocking thing to me is that it’s using algorithms that are already here, that we already have, that people said were flawed in very specific ways.”
Fully trained OpenAI Five agents are surprisingly sophisticated. The five networks don’t communicate with one another directly; instead, a “team spirit” hyperparameter determines how much or how little each agent prioritizes the team’s reward over its own. Even so, they’re masters of projectile avoidance and experience-point sharing, and even of advanced tactics like “creep blocking,” in which a hero physically blocks the path of its own lane creeps (basic units in Dota 2) to slow their advance and keep fights on favorable ground.
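The “team spirit” idea can likewise be written down in a few lines. This sketch assumes the most literal reading of the description above (each hero’s training reward is a linear blend of its own reward and the team’s average); the function name and the example numbers are invented for illustration.

def blended_rewards(individual_rewards, team_spirit):
    # team_spirit = 0.0 -> purely selfish; 1.0 -> only the team matters.
    team_mean = sum(individual_rewards) / len(individual_rewards)
    return [(1 - team_spirit) * r + team_spirit * team_mean
            for r in individual_rewards]

# Example: one hero lands a kill; with team_spirit = 0.5, every
# teammate shares some of the credit.
print(blended_rewards([1.0, 0.0, 0.0, 0.0, 0.0], team_spirit=0.5))
# [0.6, 0.1, 0.1, 0.1, 0.1]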
Dota 2 players are already studying OpenAI Five’s styles of play, some of which are surprisingly creative. (In one match, the bots adopted a mechanic that allowed their heroes to quickly recharge a certain weapon by staying out of range of enemies.) As for OpenAI, it’s applying some of the insights gleaned from the project to other fields: Last February, it released Hindsight Experience Replay (HER), an open source algorithm that effectively helps robots to learn from failure, and later in the year published research on a self-learning robotics system that can manipulate objects with humanlike dexterity.
Brockman said that while today’s match was the final public demonstration, OpenAI will “continue to work” on OpenAI Five.
“The beauty of this technology is that it doesn’t even know it’s [playing] Dota … It’s about letting people connect the strange, exotic but still very tangible intelligences that are created … modern AI technology,” he said. “Games have really been the benchmark [in AI research] … These complex strategy games are the milestone that we … have all been working towards because they start to capture aspects of the real world.”
" |
284 | 2,022 | "ChatGPT Will End High-School English - The Atlantic" | "https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412" | "The End of High-School English
I’ve been teaching English for 12 years, and I’m astounded by what ChatGPT can produce.
Teenagers have always found ways around doing the hard work of actual learning. CliffsNotes dates back to the 1950s, “No Fear Shakespeare” puts the playwright into modern English, YouTube offers literary analysis and historical explication from numerous amateurs and professionals, and so on. For as long as those shortcuts have existed, however, one big part of education has remained inescapable: writing. Barring outright plagiarism, students have always arrived at that moment when they’re on their own with a blank page, staring down a blinking cursor, the essay waiting to be written.
Now that might be about to change. The arrival of OpenAI’s ChatGPT, a program that generates sophisticated text in response to any prompt you can imagine, may signal the end of writing assignments altogether—and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill.
If you’re looking for historical analogues, this would be like the printing press, the steam drill, and the light bulb having a baby, and that baby having access to the entire corpus of human knowledge and understanding. My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change.
I teach a variety of humanities classes (literature, philosophy, religion, history) at a small independent high school in the San Francisco Bay Area. My classes tend to have about 15 students, their ages ranging from 16 to 18. This semester I am lucky enough to be teaching writers like James Baldwin, Gloria Anzaldúa, Herman Melville, Mohsin Hamid, Virginia Held. I recognize that it’s a privilege to have relatively small classes that can explore material like this at all. But at the end of the day, kids are always kids. I’m sure you will be absolutely shocked to hear that not all teenagers are, in fact, so interested in having their mind lit on fire by Anzaldúa’s radical ideas about transcending binaries, or Ishmael’s metaphysics in Moby-Dick.
To those students, I have always said: You may not be interested in poetry or civics, but no matter what you end up doing with your life, a basic competence in writing is an absolutely essential skill—whether it’s for college admissions, writing a cover letter when applying for a job, or just writing an email to your boss.
I’ve also long held, for those who are interested in writing, that you need to learn the basic rules of good writing before you can start breaking them—that, like Picasso, you have to learn how to reliably fulfill an audience’s expectations before you get to start putting eyeballs in people’s ears and things.
I don’t know if either of those things is true anymore. It’s no longer obvious to me that my teenagers actually will need to develop this basic skill, or if the logic still holds that the fundamentals are necessary for experimentation.
Let me be candid (with apologies to all of my current and former students): What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor. Over the past few days, I’ve given it a number of different prompts. And even if the bot’s results don’t exactly give you goosebumps, they do a more-than-adequate job of fulfilling a task.
I mean, look at this: I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response: In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.
It also managed to compose a convincing 400-word “friendly” cover letter for an application to be a manager at Starbucks. But most jaw-dropping of all, on a personal level: It made quick work out of an assignment I’ve always considered absolutely “unhackable.” In January, my junior English students will begin writing an independent research paper, 12 to 18 pages, on two great literary works of their own choosing—a tradition at our school. Their goal is to place the texts in conversation with each other and find a thread that connects them. Some students will struggle to find any way to bring them together. We spend two months on the paper, putting it together piece by piece.
I’ve fed GPT a handful of pairs that students have worked with in recent years: Beloved and Hamlet , The Handmaid’s Tale and The Parable of the Sower , Homer’s The Odyssey and Dante’s Inferno.
GPT brought them together instantly, effortlessly, uncannily: memory, guilt, revenge, justice, the individual versus the collective, freedom of choice, societal oppression.
The technology doesn’t go much beyond the surface, nor does it successfully integrate quotations from the original texts, but the ideas presented were on-target—more than enough to get any student rolling without much legwork.
It goes further. Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.
I’ve been teaching for about 12 years: first as a TA in grad school, then as an adjunct professor at various public and private universities, and finally in high school. From my experience, American high-school students can be roughly split into three categories. The bottom group is learning to master grammar rules, punctuation, basic comprehension, and legibility. The middle group mostly has that stuff down and is working on argument and organization—arranging sentences within paragraphs and paragraphs within an essay. Then there’s a third group that has the luxury of focusing on things such as tone, rhythm, variety, mellifluence.
Whether someone is writing a five-paragraph essay or a 500-page book, these are the building blocks not only of good writing but of writing as a tool, as a means of efficiently and effectively communicating information. And because learning writing is an iterative process, students spend countless hours developing the skill in elementary school, middle school, high school, and then finally (as thousands of underpaid adjuncts teaching freshman comp will attest) college. Many students (as those same adjuncts will attest) remain in the bottom group, despite their teachers’ efforts; most of the rest find some uneasy equilibrium in the second category.
Working with these students makes up a large percentage of every English teacher’s job. It also supports a cottage industry of professional development, trademarked methods buried in acronyms (ICE! PIE! EDIT! MEAT!), and private writing tutors charging $100-plus an hour. So for those observers who are saying, Well, good, all of these things are overdue for change—“this will lead to much-needed education reform,” a former colleague told me—this dismissal elides the heavy toll this sudden transformation is going to take on education, extending along its many tentacles (standardized testing, admissions, educational software, etc.).
Perhaps there are reasons for optimism, if you push all this aside. Maybe every student is now immediately launched into that third category: The rudiments of writing will be considered a given, and every student will have direct access to the finer aspects of the enterprise. Whatever is inimitable within them can be made conspicuous, freed from the troublesome mechanics of comma splices, subject-verb disagreement, and dangling modifiers.
But again, the majority of students do not see writing as a worthwhile skill to cultivate—just like I, sitting with my coffee and book, rereading Moby-Dick, do not consider it worthwhile to learn, say, video editing. They have no interest in exploring nuance in tone and rhythm; they will forever roll their eyes at me when I try to communicate the subtle difference, when writing an appositive phrase, between using commas, parentheses, or (the connoisseur’s choice) the em dash.
Which is why I wonder if this may be the end of using writing as a benchmark for aptitude and intelligence. After all, what is a cover letter? Its primary purpose isn’t to communicate “I already know how to do this job” (because of course I don’t) but rather “I am competent and trustworthy and can clearly express to you why I would be a good candidate for this job.” What is a written exam? Its primary signal isn’t “I memorized a bunch of information” but rather “I can express that information clearly in writing.” Many teachers have reacted to ChatGPT by imagining how to give writing assignments now—maybe they should be written out by hand, or given only in class—but that seems to me shortsighted. The question isn’t “How will we get around this?” but rather “Is this still worth doing?” I believe my most essential tasks, as a teacher, are helping my students think critically, disagree respectfully, argue carefully and flexibly, and understand their mind and the world around them. Unconventional, improvisatory, expressive, meta-cognitive writing can be an extraordinary vehicle for those things. But if most contemporary writing pedagogy is necessarily focused on helping students master the basics, what happens when a computer can do it for us? Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion? Maybe the moment when I felt most caught between these two thoughts was when I asked GPT a question from an exam I give in my Eastern-philosophy course, something I’ve never imagined a student could find outside assistance to answer: “Explain the Madhyamaka Buddhist idea that all phenomena lack inherent existence, using a funny, specific example.” ChatGPT wrote back, as ever, instantly: The Madhyamaka Buddhist idea that all phenomena lack inherent existence can be illustrated with a funny example: let’s say you have a pet hamster. You might think that your hamster has its own inherent existence, but in reality, your hamster is made up of many different elements such as its fur, its bones, its muscles, and its brain. None of these elements exist independently or inherently on their own, so your hamster itself does not have any inherent existence either. The same goes for all phenomena—everything is made up of various components that don’t have inherent existence, so nothing has any inherent existence either.
Everything is made up; it’s true. The essay as a literary form? Made up. Grammatical rules as markers of intelligence? Writing itself as a technology? Made up. Starting now, OpenAI is forcing us to ask foundational questions about whether any of those things are worth keeping around.
" |
285 | 2,018 | "OpenAI supporters" | "https://openai.com/blog/openai-supporters" | "OpenAI supporters
We’re excited to welcome new donors to OpenAI.
Illustration: Justin Jay Wang × DALL·E
February 20, 2018
We’re excited to welcome the following new donors to OpenAI: Jed McCaleb, Gabe Newell, Michael Seibel, Jaan Tallinn, and Ashton Eaton and Brianne Theisen-Eaton.
Reid Hoffman is significantly increasing his contribution.
Pieter Abbeel (having completed his sabbatical with us), Julia Galef, and Maran Nelson are becoming advisors to OpenAI. Additionally, Elon Musk will depart the OpenAI Board but will continue to donate and advise the organization. As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon.
We’re broadening our base of funders to prepare for the next phase of OpenAI, which will involve ramping up our investments in our people and the compute resources necessary to make consequential breakthroughs in artificial intelligence. OpenAI was founded a little over two years ago and since that time we’ve paired our research efforts with applied work to push the limits of what AI systems are capable of via our work in robotics and Dota.
That’s going to continue, and in the coming months you can also expect us to articulate the principles with which we’ll be approaching the next phase of OpenAI, and the policy areas in which we wish to see changes to ensure AI benefits all of humanity.
The Board is now Greg Brockman, Ilya Sutskever, Holden Karnofsky, and Sam Altman. We will add another director soon, and plan over time to further expand the Board. If you’re interested in working with us on this mission, consider joining OpenAI.
" |
286 | 2,018 | "OpenAI Scholars" | "https://openai.com/blog/openai-scholars" | "OpenAI Scholars
We’re providing 6–10 stipends and mentorship to individuals from underrepresented groups to study deep learning full-time for 3 months and open-source a project.
Photo: Blake Tucker
March 6, 2018
By Larissa Schiavo and Greg Brockman
This is a remote program and is open to anyone with US work authorization located in US time zones (we’re happy to provide a desk in our San Francisco office if you happen to be located in the Bay Area). In return, we ask that you document your experiences studying deep learning and hopefully inspire others to do the same.
Why we’re doing this
Diversity is core to AI having a positive effect on the world—it’s necessary to ensure the advanced AI systems in the future are built to benefit everyone. While we hope that some of the scholars will join OpenAI (where we are actively working on internal diversity & inclusion initiatives), we want this program to improve diversity in the field at large.
Once you’ve decided to join the field, there are many programs (such as our Fellowship or any number of residencies) which can help you develop your skills. But these require a longer commitment and some existing machine learning experience, and for many people with families or other obligations it’s not possible to simply pack up and come to the Bay Area.
Resources we’ll provide
We’ll provide you with a $7.5k/mo stipend for 3 months, from June 4, 2018 to August 31, 2018.
Each scholar will receive $25,000 worth of credits from Amazon Web Services.
You’ll have a mentor who will provide at least an hour of mentorship via video call each week, answer your questions via chat/email, and work with you to design and execute a good project to stretch your skills.
There will be a group Slack with the scholars and mentors. If you’re in the Bay Area, we’ll optionally provide a desk for you at the OpenAI office.
Meet the mentors
We’ve lined up the following mentors from OpenAI and the community. AWS is donating $25,000 worth of credits to each community mentor.
Expectations
You should be studying deep learning full-time during the 3 months.
You should write a weekly blog post updating the community on your progress: describe what you learned during the week, what materials you found useful, what questions you find yourself asking, etc.
You should complete and open-source a project by the end of the program.
What we’re looking for
You are eligible to apply if:
You are a member of an underrepresented group in science and engineering.
You have US work authorization and are located in a US timezone.
You understand this article on calculus and this article on linear algebra.
It’s fine if you have to brush up on these skills.
You are comfortable programming in Python (other languages are helpful, but you’ll spend the program writing in Python).
We’re open to all experience levels and backgrounds that meet the above criteria—it’s a common myth that you need a PhD to work in AI (many OpenAI employees don’t have one).
We’ll use these criteria for selection: Impact on you.
We want to understand why this grant and mentorship will help you achieve something you couldn’t otherwise.
Self-motivation & communication.
We’re looking for people who will work hard through those three months, and who will inspire others (in the program and externally) to endeavor to learn deep learning as well.
Technical skills.
The stronger your technical background, the more time you’ll spend focusing on the deep learning itself.
Questions? Email [email protected].
Applications are open starting immediately, and starting March 14th we will begin reviewing applications, contacting people for follow-up, and filling positions on a rolling basis. Applications will close no later than 11:59pm PT on March 31st, with decisions sent no later than April 16th.
Apply now! Update: Applications for the Summer 2018 Scholars cohort are now closed. We will be reaching out to applicants regarding their admissions status by April 16th.
" |
287 | 2,019 | "OpenAI Fellows Winter 2019 & Interns Summer 2019" | "https://openai.com/blog/openai-fellows-interns-2019" | "OpenAI Fellows Winter 2019 & Interns Summer 2019
We are now accepting applications for OpenAI Fellows and Interns for 2019.
Illustration: Ruby Chen
October 9, 2018
By Larissa Schiavo and Ashley Pilipiszyn
The Fellows and Interns programs give people who are currently studying AI, or who want to transition into AI from another specialty, an opportunity to work at OpenAI.
Applications for the Fellows Winter 2019 are now closed—please check back later in 2019 to apply for our next cohort.
Fellows
OpenAI Fellows for the February 2019 Cohort will spend the first 2 months of this program working through a specially-curated curriculum written by OpenAI’s researchers and writing a research proposal based on their interests. Fellows will then work on the project outlined in their research proposal for the following 4 months with guidance from their OpenAI mentor. This 6-month program is specifically designed for people who want to transition into conducting artificial intelligence research and apply their current domain expertise/skills.
What we’re looking for
Diverse backgrounds.
Previous fellows have come from various backgrounds spanning across genetics, software engineering, physics, and theoretical computer science.
Dedication.
We will give priority to applicants that can join OpenAI full-time following the program.
Passion for Research.
We want to engage with researchers and scientists who are motivated to dive into AI research and have previous research project experience.
Timeframe
10/9/2018: Applications open
11/5/2018: Applications close
1/3/2019: Notify all applicants
2/4/2019: Cohort starts
8/2/2019: Cohort ends
Interns
OpenAI interns for the Summer 2019 Cohort will work with an OpenAI team and contribute to OpenAI’s research over the course of 3 months, starting May 2019. Our interns contribute to large-scale projects like our work on Robotics and conduct their own research into AI. To get a sense of what sort of projects people work on, please check out some of the presentations from our 2018 Intern Open House.
What we’re looking for
Self-Direction.
We want to work with interns who have demonstrated an ability to guide themselves in solo work as well as contribute as a part of a team.
Practical Skills.
While research is important, we also really value interns who are great at implementing their ideas quickly, working with a shared codebase, and communicating changes to their work with their team.
Technical & Research Experience.
We value interns with a strong body of scientific work (especially single-author papers) as well as engineering backgrounds that can contribute to our current AI research efforts.
Timeframe
We offer two decision periods: Early Decision and General Admission.
Early Decision is designed to give candidates the opportunity to apply in Fall 2018 and secure an internship for Summer 2019 by the end of 2018. If you apply by November 2nd, 2018, we will notify you of your admissions status by December 21st, 2018. The application timeframe and deadlines below apply to both international and US-based applicants.
10/9/2018: Open Summer 2019 Early Decision applications (phone interviews & on-site interviews during this time)
11/2/2018: Close Summer 2019 Early Decision applications
12/21/2018: Notify Early Decision applicants
General Admission allows candidates to apply later in the year for a Summer 2019 internship, with rolling admissions. We may close applications for Summer 2019 early if we reach capacity earlier than anticipated, and will update the job post when that happens.
We are accepting applications from candidates without US work authorization from November 3rd, 2018 to February 15th, 2019, and will notify international applicants of their admissions status by March 15th, 2019.
We are accepting applications from candidates with US work authorization from November 3rd, 2018 to March 15th, 2019, and will notify these applicants of their admissions status by April 15th, 2019 at the latest.
If you apply after March 15th, 2019, we will consider you for future summer internships, or fall/winter internships (in exceptional cases).
11/3/2019 General Admission Applications Open (US & Int’l) 2/15/2019 International General Admission Applications Close 3/15/2019 Notify International General Admissions applicants 3/15/2019 General Admission (US) Close 4/15/2019 Notify US General Admission applicants Selection criteria FAQ Fellows You want to transition into conducting artificial intelligence research You can demonstrate your interest in AI research via past projects or evidence of significant self-study (If you’re in the middle of a degree program, please apply for an internship instead.) Questions—email [email protected] Interns You are in your final year of your PhD or undergraduate degree and are available to work within a year of completing your internship.
You have a strong body of scientific work, especially first-author papers.
You have significant open-source contributions to the ML community.
You have strong engineering skills, with a primary interest in research.
You are available for a minimum of 3 months, starting in May 2019.
Questions—email [email protected]

Authors: Larissa Schiavo, Ashley Pilipiszyn
" |
288 | 2,019 | "Jeff Bezos’s Master Plan - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2019/11/what-jeff-bezos-wants/598363" | "
Jeff Bezos’s Master Plan
By Franklin Foer

What the Amazon founder and CEO wants for his empire and himself, and what that means for the rest of us.
1.0

Where in the pantheon of American commercial titans does Jeffrey Bezos belong? Andrew Carnegie’s hearths forged the steel that became the skeleton of the railroad and the city. John D. Rockefeller refined 90 percent of American oil, which supplied the pre-electric nation with light. Bill Gates created a program that was considered a prerequisite for turning on a computer.
At 55, Bezos has never dominated a major market as thoroughly as any of these forebears, and while he is presently the richest man on the planet, he has less wealth than Gates did at his zenith. Yet Rockefeller largely contented himself with oil wells, pump stations, and railcars; Gates’s fortune depended on an operating system. The scope of the empire the founder and CEO of Amazon has built is wider. Indeed, it is without precedent in the long history of American capitalism.
Today, Bezos controls nearly 40 percent of all e-commerce in the United States. More product searches are conducted on Amazon than on Google, which has allowed Bezos to build an advertising business as valuable as the entirety of IBM. One estimate has Amazon Web Services controlling almost half of the cloud-computing industry—institutions as varied as General Electric, Unilever, and even the CIA rely on its servers. Forty-two percent of paper book sales and a third of the market for streaming video are controlled by the company; Twitch, its video platform popular among gamers, attracts 15 million users a day. Add The Washington Post to this portfolio and Bezos is, at a minimum, a rival to the likes of Disney’s Bob Iger or the suits at AT&T, and arguably the most powerful man in American culture.
I first grew concerned about Amazon’s power five years ago. I felt anxious about how the company bullied the book business, extracting ever more favorable terms from the publishers that had come to depend on it. When the conglomerate Hachette, with which I’d once published a book, refused to accede to Amazon’s demands, it was punished. Amazon delayed shipments of Hachette books; when consumers searched for some Hachette titles, it redirected them to similar books from other publishers. In 2014, I wrote a cover story for The New Republic with a pugilistic title: “Amazon Must Be Stopped.” Citing my article, the company subsequently terminated an advertising campaign for its political comedy, Alpha House, that had been running in the magazine.
Since that time, Bezos’s reach has only grown. To the U.S. president, he is a nemesis. To many Americans, he is a beneficent wizard of convenience and abundance. Over the course of just this past year, Amazon has announced the following endeavors: It will match potential home buyers with real-estate agents and integrate their new homes with Amazon devices; it will enable its voice assistant, Alexa, to access health-care data, such as the status of a prescription or a blood-sugar reading; it will build a 3-million-square-foot cargo airport outside Cincinnati; it will make next-day delivery standard for members of its Prime service; it will start a new chain of grocery stores, in addition to Whole Foods, which it already owns; it will stream Major League Baseball games; it will launch more than 3,000 satellites into orbit to supply the world with high-speed internet.
Bezos’s ventures are by now so large and varied that it is difficult to truly comprehend the nature of his empire, much less the end point of his ambitions. What exactly does Jeff Bezos want? Or, to put it slightly differently, what does he believe? Given his power over the world, these are not small questions. Yet he largely keeps his intentions to himself; many longtime colleagues can’t recall him ever expressing a political opinion. To replay a loop of his interviews from Amazon’s quarter century of existence is to listen to him retell the same unrevealing anecdotes over and over.
To better understand him, I spent five months speaking with current and former Amazon executives, as well as people at the company’s rivals and scholarly observers. Bezos himself declined to participate in this story, and current employees would speak to me only off the record. Even former staffers largely preferred to remain anonymous, assuming that they might eventually wish to work for a business somehow entwined with Bezos’s sprawling concerns.
In the course of these conversations, my view of Bezos began to shift. Many of my assumptions about the man melted away; admiration jostled with continued unease. And I was left with a new sense of his endgame.
Bezos loves the word relentless—it appears again and again in his closely read annual letters to shareholders—and I had always assumed that his aim was domination for its own sake. In an era that celebrates corporate gigantism, he seemed determined to be the biggest of them all. But to say that Bezos’s ultimate goal is dominion over the planet is to misunderstand him. His ambitions are not bound by the gravitational pull of the Earth.
Before Bezos settled on Amazon.com, he toyed with naming his unlaunched store MakeItSo.com. He entertained using the phrase because he couldn’t contain a long-standing enthusiasm. The rejected moniker was a favored utterance of a man Bezos idolizes: the captain of the starship USS Enterprise-D, Jean-Luc Picard.
Bezos is unabashed in his fanaticism for Star Trek and its many spin-offs. He has a holding company called Zefram, which honors the character who invented warp drive. He persuaded the makers of the film Star Trek Beyond to give him a cameo as a Starfleet official. He named his dog Kamala, after a woman who appears in an episode as Picard’s “perfect” but unattainable mate. As time has passed, Bezos and Picard have physically converged. Like the interstellar explorer, portrayed by Patrick Stewart, Bezos shaved the remnant strands on his high-gloss pate and acquired a cast-iron physique. A friend once said that Bezos adopted his strenuous fitness regimen in anticipation of the day that he, too, would journey to the heavens.
When reporters tracked down Bezos’s high-school girlfriend, she said, “The reason he’s earning so much money is to get to outer space.” This assessment hardly required a leap of imagination. As the valedictorian of Miami Palmetto Senior High School’s class of 1982, Bezos used his graduation speech to unfurl his vision for humanity. He dreamed aloud of the day when millions of his fellow earthlings would relocate to colonies in space. A local newspaper reported that his intention was “to get all people off the Earth and see it turned into a huge national park.”

Most mortals eventually jettison teenage dreams, but Bezos remains passionately committed to his, even as he has come to control more and more of the here and now. Critics have chided him for philanthropic stinginess, at least relative to his wealth, but the thing Bezos considers his primary humanitarian contribution isn’t properly charitable. It’s a profit-seeking company called Blue Origin, dedicated to fulfilling the prophecy of his high-school graduation speech. He funds that venture—which builds rockets, rovers, and the infrastructure that permits voyage beyond the Earth’s atmosphere—by selling about $1 billion of Amazon stock each year. More than his ownership of his behemoth company or of The Washington Post—and more than the $2 billion he’s pledged to nonprofits working on homelessness and education for low-income Americans—Bezos calls Blue Origin his “most important work.”

He considers the work so important because the threat it aims to counter is so grave. What worries Bezos is that in the coming generations the planet’s growing energy demands will outstrip its limited supply. The danger, he says, “is not necessarily extinction,” but stasis: “We will have to stop growing, which I think is a very bad future.” While others might fret that climate change will soon make the planet uninhabitable, the billionaire wrings his hands over the prospects of diminished growth. But the scenario he describes is indeed grim. Without enough energy to go around, rationing and starvation will ensue. Over the years, Bezos has made himself inaccessible to journalists asking questions about Amazon. But he shares his faith in space colonization with a preacher’s zeal: “We have to go to space to save Earth.”

At the heart of this faith is a text Bezos read as a teen. In 1976, a Princeton physicist named Gerard K. O’Neill wrote a populist case for moving into space called The High Frontier, a book beloved by sci-fi geeks, NASA functionaries, and aging hippies. As a Princeton student, Bezos attended O’Neill seminars and ran the campus chapter of Students for the Exploration and Development of Space. Through Blue Origin, Bezos is developing detailed plans for realizing O’Neill’s vision.
The professor imagined colonies housed in miles-long cylindrical tubes floating between Earth and the moon. The tubes would sustain a simulacrum of life back on the mother planet, with soil, oxygenated air, free-flying birds, and “beaches lapped by waves.” When Bezos describes these colonies—and presents artists’ renderings of them—he sounds almost rapturous. “This is Maui on its best day, all year long. No rain, no storms, no earthquakes.” Since the colonies would allow the human population to grow without any earthly constraints, the species would flourish like never before: “We can have a trillion humans in the solar system, which means we’d have a thousand Mozarts and a thousand Einsteins. This would be an incredible civilization.”

Bezos rallies the public with passionate peroration and convincing command of detail. Yet a human hole remains in his presentation. Who will govern this new world? Who will write its laws? Who will decide which earthlings are admitted into the colonies? These questions aren’t explicitly answered, except with his fervent belief that entrepreneurs, those in his own image, will shape the future. And he will do his best to make it so. With his wealth, and the megaphone that it permits him, Bezos is attempting to set the terms for the future of the species, so that his utopia can take root.
In a way, Bezos has already created a prototype of a cylindrical tube inhabited by millions, and it’s called Amazon.com. His creation is less a company than an encompassing system. If it were merely a store that sold practically all salable goods—and delivered them within 48 hours—it would still be the most awe-inspiring creation in the history of American business. But Amazon is both that tangible company and an abstraction far more powerful.
Bezos’s enterprise upends long-held precepts about the fundamental nature of capitalism—especially an idea enshrined by the great Austrian economist Friedrich Hayek. As World War II drew to its close, Hayek wrote the essay “The Use of Knowledge in Society,” a seminal indictment of centralized planning. Hayek argued that no bureaucracy could ever match the miracle of markets, which spontaneously and efficiently aggregate the knowledge of a society. When markets collectively set a price, that price reflects the discrete bits of knowledge scattered among executives, workers, and consumers. Any governmental attempt to replace this organic apparatus—to set prices unilaterally, or even to understand the disparate workings of an economy—is pure hubris.
Amazon, however, has acquired the God’s-eye view of the economy that Hayek never imagined any single entity could hope to achieve. At any moment, its website has more than 600 million items for sale and more than 3 million vendors selling them. With its history of past purchases, it has collected the world’s most comprehensive catalog of consumer desire, which allows it to anticipate both individual and collective needs. With its logistics business—and its growing network of trucks and planes—it has an understanding of the flow of goods around the world. In other words, if Marxist revolutionaries ever seized power in the United States, they could nationalize Amazon and call it a day.
What makes Amazon so fearsome to its critics isn’t purely its size but its trajectory. Amazon’s cache of knowledge gives it the capacity to build its own winning version of an astonishing array of businesses. In the face of its growth, long-dormant fears of monopoly have begun to surface—and Amazon has reportedly found itself under review by the Federal Trade Commission and the Department of Justice. But unlike Facebook, another object of government scrutiny, Bezos’s company remains deeply trusted by the public. A 2018 poll sponsored by Georgetown University and the Knight Foundation found that Amazon engendered greater confidence than virtually any other American institution. Despite Donald Trump’s jabs at Bezos, this widespread faith in the company makes for a source of bipartisan consensus, although the Democrats surveyed were a touch more enthusiastic than the Republicans were: They rated Amazon even more trustworthy than the U.S. military. In contrast to the dysfunction and cynicism that define the times, Amazon is the embodiment of competence, the rare institution that routinely works.
All of this confidence in Bezos’s company has made him a singular figure in the culture, which, at times, regards him as a flesh-and-blood Picard. If “Democracy dies in darkness”—the motto of the Bezos-era Washington Post—then he is the rescuer of the light, the hero who reversed the terminal decline of Woodward and Bernstein’s old broadsheet. When he wrote a Medium post alleging that the National Enquirer had attempted to extort him, he was hailed for taking a stand against tabloid sleaze and cyberbullying.
As Amazon has matured, it has assumed the trappings of something more than a private enterprise. It increasingly poses as a social institution tending to the common good. After it earned derision for the alleged treatment of its workers—some warehouse employees reported feeling pressured to forgo bathroom breaks to meet productivity targets, to cite just one example—it unilaterally raised its minimum wage to $15 an hour in the U.S., then attempted to shame competitors that didn’t follow suit. (Amazon says that employees are allowed to use the bathroom whenever they want.) As technology has reshaped its workforce, Amazon has set aside $700 million to retrain about a third of its U.S. employees for roles with new demands.
These gestures are partly gambits to insulate the company’s reputation from accusations of rapaciousness. But they also tie Amazon to an older conception of the corporation. In its current form, Amazon harkens back to Big Business as it emerged in the postwar years. When Charles E. Wilson, the president of General Motors, was nominated to be secretary of defense in 1953, he famously told a Senate confirmation panel, “I thought what was good for our country was good for General Motors, and vice versa.” For the most part, this was an aphorism earnestly accepted as a statement of good faith. To avert class warfare, the Goliaths of the day recognized unions; they bestowed health care and pensions upon employees. Liberal eminences such as John K. Galbraith hailed the corporation as the basis for a benign social order. Galbraith extolled the social utility of the corporation because he believed that it could be domesticated and harnessed to serve interests other than its own bottom line. He believed businesses behave beneficently when their self-serving impulses are checked by “countervailing power” in the form of organized labor and government.
Of course, those powers have receded. Unions, whose organizing efforts Amazon has routinely squashed, are an unassuming nub of their former selves; the regulatory state is badly out of practice. So while Amazon is trusted, no countervailing force has the inclination or capacity to restrain it. And while power could amass in a more villainous character than Jeff Bezos, that doesn’t alleviate the anxiety that accompanies such concentration. Amazon might be a vast corporation, with more than 600,000 employees, but it is also the extension of one brilliant, willful man with an incredible knack for bending the world to his values.
2.0

After Jackie Bezos’s shotgun marriage to a member of a traveling unicyclist troupe dissolved, she dedicated herself to their only progeny. The teenage mother from Albuquerque became her son’s intellectual champion. She would drive him 40 miles each day so that he could attend an elementary school for high-testing kids in Houston. When a wait list prevented him from entering the gifted track in middle school, she wheedled bureaucrats until they made an exception. Over the course of Bezos’s itinerant childhood, as his family traversed the Sun Belt of the ’70s, Jackie encouraged her son’s interest in tinkering by constantly shuttling him to RadioShack.
“I have always been academically smart,” Bezos told an audience in Washington, D.C., last year. This was a sentiment ratified by the world as he ascended the meritocracy. At Princeton, he flirted with becoming a theoretical physicist. On Wall Street, he joined D. E. Shaw, arguably the brainiest and most adventurous hedge fund of the ’90s. The firm would send unsolicited letters to dean’s-list students at top universities, telling them: “We approach our recruiting in unapologetically elitist fashion.” The computer scientist who founded the firm, David E. Shaw, had dabbled in the nascent internet in the ’80s. This provided him with unusual clarity about the coming revolution and its commercial implications. He anointed Bezos to seek out investment opportunities in the newly privatized medium—an exploration that led Bezos to his own big idea.
When Bezos created Amazon in 1994, he set out to build an institution like the ones that had carried him through the first three decades of his life. He would build his own aristocracy of brains, a place where intelligence would rise to the top. Early on, Bezos asked job candidates for their SAT scores. The company’s fifth employee, Nicholas Lovejoy, later told Wired that interviews would take the form of a Socratic test. Bezos would probe logical acuity with questions like Why are manhole covers round? According to Lovejoy, “One of his mottos was that every time we hired someone, he or she should raise the bar for the next hire, so that the overall talent pool was always improving.” When Bezos thought about talent, in other words, he was self-consciously in a Darwinian mode.
By the logic of natural selection, it was hardly obvious that a bookstore would become the dominant firm in the digital economy. From Amazon’s infancy, Bezos mastered the art of coyly deflecting questions about where he intended to take his company. But back in his hedge-fund days, he had kicked around the idea of an “everything store” with Shaw. And he always conveyed the impression of having grand plans—a belief that the fiction aisle and the self-help section might serve as the trailhead to commanding heights.
In the vernacular, Amazon is often lumped together with Silicon Valley. At its spiritual center, however, Amazon is a retailer, not a tech company. Amazon needed to elbow its way into a tightly packed and unforgiving industry, where it faced entrenched entities such as Barnes & Noble, Walmart, and Target. In mass-market retail, the company with the thinnest margin usually prevails, and a soft December can ruin a year. Even as Bezos prided himself on his capacity for thinking far into the future, he also had to worry about the prospect of tomorrow’s collapse. At tightfisted Amazon, there were no big bonuses at year’s end, no business-class flights for executives on long hauls, no employee kitchens overflowing with protein bars.
Bezos was hardly a mellow leader, especially in the company’s early days. To mold his organization in his image, he often lashed out at those who failed to meet his high standards. The journalist Brad Stone’s indispensable book about the company, The Everything Store, contains a list of Bezos’s cutting remarks: “Are you lazy or just incompetent?” “This document was clearly written by the B team. Can someone get me the A-team document?” “Why are you ruining my life?” (Amazon says this account is not reflective of Bezos’s leadership style.) This was the sarcastic, demeaning version of his endless questioning. But Bezos’s waspish intelligence and attention to detail—his invariable focus on a footnote or an appendix—elicited admiration alongside the dread. “If you’re going in for a Bezos meeting, you’re preparing as if the world is going to end,” a former executive told me. “You’re like, I’ve been preparing for the last three weeks. I’ve asked every damn person that I know to think of questions that could be asked. Then Bezos will ask you the one question you hadn’t considered.”

The growth of the company—which already brought in nearly $3 billion in revenue in its seventh year of existence—prodded Bezos to adapt his methods. He created a new position, technical adviser, to instill his views in top managers; the technical advisers would shadow the master for at least a year, and emerge as what executives jokingly refer to as “Jeff-bots.” His managerial style, which had been highly personal, was codified in systems and procedures. These allowed him to scale his presence so that even if he wasn’t sitting in a meeting, his gestalt would be there.
In 2002, Amazon distilled Bezos’s sensibility into a set of Leadership Principles , a collection of maxims including “Invent and Simplify,” “Bias for Action,” and “Have Backbone; Disagree and Commit.” To an outside ear, these sound too hokey to be the basis for fervent belief. But Amazonians, as employees call themselves, swear by them. The principles, now 14 in number, are the subject of questions asked in job interviews; they are taught in orientations; they are the qualities on which employees are judged in performance reviews.
Of all the principles, perhaps the most sacrosanct is “Customer Obsession”—the commandment to make decisions only with an eye toward pleasing the consumer, rather than fixating on competitors—a pillar of faith illustrated by the Great Lube Scandal. About 10 years ago, Bezos became aware that Amazon was sending emails to customers suggesting the purchase of lubricants. This fact made him apoplectic. If such an email arrived at work, a boss might glimpse it. If it arrived at home, a child might pose uncomfortable questions. Bezos ordered the problem solved and threatened to shut down Amazon’s email promotions in their entirety if it wasn’t. Kristi Coulter, who served as the head of worldwide editorial and site merchandising, led a group that spent weeks compiling a list of verboten products, which Bezos’s top deputies then reviewed. She told me, “It wasn’t just, like, hemorrhoid cream, or lube, it was hair color, any kind of retinol. They were so conservative about what they thought would be embarrassing. Even tooth-whitening stuff, they were like, ‘No. That could be embarrassing.’”

To climb Amazon’s organizational chart is to aspire to join the inner sanctum at the very peak, called the S-Team (“the senior team”). These are the 17 executives who assemble regularly with Bezos to debate the company’s weightiest decisions. Bezos treats the S-Team with familial affection; its members come closest to being able to read his mind. The group has absorbed the Bezos method and applies it to the corners of the company that he can’t possibly touch. According to James Thomson, a manager who helped build Amazon Marketplace, where anyone can sell new or used goods through the website, “At most companies, executives like to show how much they know. At Amazon, the focus is on asking the right question. Leadership is trained to poke holes in data.”

Once an executive makes it to the S-Team, he remains on the S-Team. The stability of the unit undoubtedly provides Bezos a measure of comfort, but it also calcifies this uppermost echelon in an antiquated vision of diversity. The S-Team has no African Americans; the only woman runs human resources. Nor does the composition of leadership change much a step down the ladder. When CNBC examined the 48 executives who run Amazon’s core businesses (including retail, cloud, and hardware), it found only four women.
One former team leader, who is a person of color, told me that when top executives hear the word diversity, they interpret it to mean “the lowering of standards.” “It’s this classic libertarian thinking,” Coulter told me. “They think Amazon is a meritocracy based on data, but who’s deciding what gets counted and who gets to avail themselves of the opportunity? If VP meetings are scheduled at 7 a.m., how many mothers can manage that?” (Amazon disputes the methodology CNBC used to tally women in its senior leadership ranks. “There are dozens of female executives that play a critical role in Amazon’s success,” a spokesman told me in an email. He cited the company’s generous parental-leave policy, a commitment to flexible scheduling, and the fact that more than 40 percent of its global workforce is female as evidence of its pursuit of gender equity. He also said that its Leadership Principles insist that employees “seek diverse perspectives.”)

The meritocrat’s blind spot is that he considers his place in the world well earned by dint of intelligence and hard work. This belief short-circuits his capacity to truly listen to critics. When confronted about the composition of the S-Team in a company-wide meeting two years ago, Bezos seemed to dismiss the urgency of the complaint. According to CNBC, he said that he expected “any transition there to happen very incrementally over a long period of time.” The latest addition to the group, made this year, was another white male.
Bezos built his organization to be an anti-bureaucracy. To counter the tendency of groups to bloat, he instituted something called “two-pizza teams.” (Like Bezos’s other managerial innovations, this sounds like a gimmick, except that advanced engineers and economists with doctorates accept it as the organizing principle of their professional lives.) According to the theory, teams at Amazon should ideally be small enough to be fed with two pizzas.
In its warehouses, Amazon has used video games to motivate workers—the games, with names like MissionRacer, track output and pit workers against one another, prodding them to move faster. The two-pizza teams represent a more subtle, white-collar version of this gamification. The small teams instill a sense of ownership over projects. But employees placed on such small teams can also experience a greater fear of failure, because there’s no larger group in which to hide or to more widely distribute blame.
Amazon has a raft of procedures to guide its disparate teams. Bezos insists that plans be pitched in six-page memos, written in full sentences, a form he describes as “narrative.” This practice emerged from a sense that PowerPoint had become a tool for disguising fuzzy thinking. Writing, Bezos surmised, demands a more linear type of reasoning. As John Rossman, an alumnus of the company who wrote a book called Think Like Amazon , described it, “If you can’t write it out, then you’re not ready to defend it.” The six-pagers are consumed at the beginning of meetings in what Bezos has called a “study hall” atmosphere. This ensures that the audience isn’t faking its way through the meeting either. Only after the silent digestion of the memo—which can be an anxiety-inducing stretch for its authors—can the group ask questions about the document.
Most teams at Amazon are hermetic entities; required expertise is embedded in each group. Take Amazon’s robust collection of economists with doctorates. In the past several years, the company has hired more than 150 of them, which makes Amazon a far larger employer of economists than any university in the country. Tech companies such as Microsoft and Uber have also hired economists, although not as many. And while other companies have tended to keep them in centralized units, often working on forecasting or policy issues, Amazon takes a different approach. It distributes economists across a range of teams, where they can, among other things, run controlled experiments that permit scientific, and therefore effective, manipulation of consumer behavior.
Relentless might be the most Amazonian word, but Bezos also talks about the virtues of wandering. “Wandering is an essential counterbalance to efficiency,” he wrote in a letter to shareholders this year. When I spoke with workers based at Amazon’s Seattle headquarters, they said what they appreciated most about their employer was the sense of intellectual autonomy it allowed. Once they had clearly articulated a mission in an approved six-pager, they typically had wide latitude to make it happen, without having to fight through multiple layers of approval. The wandering mentality has also helped Amazon continually expand into adjacent businesses—or businesses that seem, at first, unrelated. Assisted by the ever-growing consumer and supplier data it collects, and the insights into human needs and human behavior it is constantly uncovering, the company keeps finding new opportunities for growth.
What is Amazon, aside from a listing on Nasdaq? This is a flummoxing question. The company is named for the world’s most voluminous river, but it also has tributaries shooting out in all directions.
Retailer hardly captures the company now that it’s also a movie studio, an artificial-intelligence developer, a device manufacturer, and a web-services provider. But to describe it as a conglomerate isn’t quite right either, given that so many of its businesses are tightly integrated or eventually will be. When I posed the question to Amazonians, I got the sense that they considered the company to be a paradigm—a distinctive approach to making decisions, a set of values, the Jeff Bezos view of the world extended through some 600,000 employees. This description, of course, means that the company’s expansion has no natural boundary; no sector of the economy inherently lies beyond its core competencies.
3.0

In late 2012, Donald Graham prepared to sell his inheritance, The Washington Post.
He wanted to hand the paper over to someone with pockets deep enough to hold steady through the next recession; he wanted someone techie enough to complete the paper’s digital transition; above all, he wanted someone who grasped the deeper meaning of stewardship. Graham came up with a shortlist of ideal owners he would pursue, including the financier David M. Rubenstein, former New York City Mayor Michael Bloomberg, eBay founder Pierre Omidyar, and Bezos.
The last of the names especially enticed Graham. That January, he had breakfast with his friend and adviser Warren Buffett, who also happened to be a shareholder in the Post.
Buffett mentioned that he considered Bezos the “best CEO in the United States”—hardly an unconventional opinion, but Graham had never heard it from Buffett before. After the breakfast, Graham set out to better understand Bezos’s ideological predilections. “I did a primitive Google search and found nothing, as close to nothing for somebody with that kind of wealth. I didn’t know what his politics were,” he told me. This blankness suggested to Graham the stuff of an ideal newspaper owner.
Graham dispatched an emissary to make the pitch. It was a polite but hardly promising conversation: Bezos didn’t rule out the possibility of bidding for the Post, but he didn’t display any palpable enthusiasm, either. The fact that he dropped the subject for several months seemed the best gauge of his interest. While Bezos ghosted Graham, Omidyar, the most enthusiastic of the bidders, continued to seek the prize.
Bezos’s past pronouncements may not have revealed partisanship, but they did suggest little appetite for stodgy institutionalism. Like so many CEOs of the era, Bezos figured himself an instrument of creative destruction, with little sympathy for the destroyed. “Even well-meaning gatekeepers slow innovation,” he wrote in his 2011 letter to shareholders. He was critiquing New York book publishers, whose power Amazon had aimed to diminish. But he harbored a similarly dim view of self-satisfied old-media institutions that attempted to preserve their cultural authority.
It therefore came as a surprise when, after months of silence, Bezos sent a three-sentence email expressing interest in the Post.
Graham made plans to lunch with Bezos in Sun Valley, Idaho, where they would both be attending Allen & Company’s summer conference. Over sandwiches that Graham brought back to his rental, the old proprietor made his preferred buyer a counterintuitive pitch: He explained all the reasons owning a newspaper was hard. He wanted Bezos to know that a newspaper was a self-defeating vehicle for promoting business interests or any preferred agenda. The conversation was a tutorial in the responsibilities of the elite, from a distinguished practitioner.
Graham didn’t need to plead with Bezos. In Sun Valley, they hardly haggled over terms. “We had brunch twice, and at the end we shook hands, unlike almost any deal I’ve ever made in business,” Graham told me. The man who decried gatekeepers was suddenly the keeper of one of the nation’s most important gates.
Buying the Post was not a financially momentous event in the life of Jeff Bezos. In addition to the billions in Amazon stock he owned, he had quietly invested in Google and Uber in their infancy. The Bezos imprimatur, the young companies had understood, would burnish their chances with any other would-be investor. (Uber’s initial public offering alone earned him an estimated $400 million earlier this year, far more than he paid for the Post in 2013.) But the purchase was a turning point in Bezos’s reputational history—and realigned his sense of place in the world. On the eve of the acquisition, Amazon’s relationship with New York publishing was contentious. The friendly guy who professed his love of Kazuo Ishiguro novels and had created a cool new way to buy books was now seen in some quarters as an enemy of literary culture and a successor to the monopolist Rockefeller. Not long before the acquisition, he had written a memo, obtained by Brad Stone, titled “Amazon.love,” asking the S-Team to ponder how the company could avoid becoming as feared as Walmart, Goldman Sachs, and Microsoft. Although he never justified the purchase of the Post as a response to his anxieties about Amazon’s image—and, of course, his own—the question must have been on his mind as he considered the opportunity. To save a civically minded institution like the Post was a chance to stake a different legacy for himself.
Bezos keeps the Post structurally separate from Amazon—his family office monitors the business of the paper—but he runs it in the same expansionist spirit as he does his company. He vowed to put every dollar of profit back into the enterprise. In the six years of his ownership, the Post newsroom has grown from 500 to just over 850.
Despite his investments in the institution, Bezos’s transition to Washington, D.C., was halting and awkward. It took him several months to visit the Post newsroom and try to allay rank-and-file nervousness about the intentions of the new owner. When the Post’s great editor Ben Bradlee died several months into Bezos’s tenure, Bezos decided to attend the funeral only after Bob Woodward explained its spiritual significance. His attachment to the paper didn’t seem to acquire emotional depth until he sent his jet to retrieve the reporter Jason Rezaian from Iran, where he’d been imprisoned for 18 months, and personally accompanied him home. The press hailed Bezos for displaying such a strong interest in the fate of his reporter, a taste of how media extol those they regard as their own saviors.
It may have taken him a moment to realize that Washington would be a new center of his life, but once he did, he rushed to implant himself there. In 2016, he paid $23 million to buy the site of a former museum just down the block from Woodrow Wilson’s old home. The museum had joined together two mansions, one of which had been designed by John Russell Pope, the architect of the Thomas Jefferson Memorial. Bezos kept one of the buildings as his residential quarters and set about renovating the other for the sake of socializing, a space that seemed to self-consciously recall Katharine Graham’s old salon, except with geothermal heat.
Washingtonian magazine, which obtained Bezos’s blueprints, predicted that, once complete, it would become “a veritable Death Star of Washington entertaining.”

While Bezos made himself at home in Washington, so did his company, but on its own terms. The Obama years were a boom time for Big Tech. Executives regularly shuffled through the White House. Visitor logs record that no American company visited more often than Google. Silicon Valley hurled itself into policy debates with its characteristic pretense of idealism, even as it began to hire Brioni-clad influence peddlers. It was, by its own account, battling for nothing less than the future of the free internet, a fight to preserve net neutrality and prevent greedy telecoms from choking the liberatory promise of the new medium.
As the tech companies invested heavily in policy, Amazon would occasionally cheer them on and join their coalitions. But mostly it struck a pose of indifference. Amazon didn’t spend as much on lobbyists as most of its Big Tech brethren did, at least not until the late Obama years. Amazon seemed less concerned about setting policy than securing lucrative contracts. It approached government as another customer to be obsessed over.
Given the way Democrats now bludgeon Big Tech , it’s hard to remember how warmly Barack Obama embraced the industry, and how kindly Big Tech reciprocated with campaign donations. But there was a less visible reason for the alliance: As the debacle of healthcare.gov graphically illustrated, Obama badly needed a geek squad. He installed the nation’s first-ever chief technology officer, and the administration began to importune the federal bureaucracy to upload itself to the cloud, a move it promised would save money and more effectively secure sensitive material.
Cloud First was the official name of the policy. Amazon had nothing to do with its inception, but it stood to make billions from it. It had wandered into the cloud-computing business long before its rivals. Amazon Web Services is, at its most elemental, a constellation of server farms around the world, which it rents at low cost as highly secure receptacles for data. Apple, the messaging platform Slack, and scores of start-ups all reside on AWS.
If retail was a maddeningly low-margin business, AWS was closer to pure profit. And Amazon had the field to itself. “We faced no like-minded competition for seven years. It’s unbelievable,” Bezos boasted last year. AWS is such a dominant player that even Amazon’s competitors, including Netflix, house data with it—although Walmart resolutely refuses, citing anxieties about placing its precious secrets on its competitor’s servers. Walmart is more suspicious than the intelligence community: In 2013, the CIA agreed to spend $600 million to place its data in Amazon’s cloud.
Other Big Tech companies have fretted about the morality of becoming entangled with the national-security state. But Bezos has never expressed such reservations. His grandfather developed missile-defense systems for the Pentagon and supervised nuclear labs. Bezos grew up steeped in the romance of the Space Age, a time when Big Business and Big Government linked arms to achieve great national goals. Besides, to be trusted with the secrets of America’s most secretive agency gave Amazon a talking point that it could take into any sales pitch—the credentials that would recommend it to any other government buyer.
One of Amazon’s great strengths is its capacity to learn, and it eventually acclimated itself to the older byways of Washington clientelism, adding three former congressmen to its roster of lobbyists. (Amazon’s spending on lobbying has increased by almost 470 percent since 2012.) It also began to hire officials as they stepped out of their agencies. When the Obama administration’s top procurement officer, Anne Rung, left her post, she headed straight to Amazon.
The goal wasn’t just to win cloud-computing contracts. Amazon sold facial-recognition software to law-enforcement agencies and has reportedly pitched it to Immigration and Customs Enforcement. Amazon also wanted to become the portal through which government bureaus buy staples, chairs, coffee beans, and electronic devices. This wasn’t a trivial slice of business; the U.S. government spends more than $50 billion on consumer goods each year. In 2017, the House of Representatives quietly passed the so-called Amazon amendment, buried within a larger appropriations bill. The provisions claimed to modernize government procurement, but also seemed to set the terms for Amazon’s dominance of this business. Only after competitors grasped the significance of the amendment did a backlash slow the rush toward Amazon. (The government is preparing to run a pilot program testing a few different vendors.) Still, government’s trajectory was easy to see, especially if one looked outside the capital city. In 2017, Amazon signed an agreement with a little-known organization called U.S. Communities, with the potential to yield an estimated $5.5 billion. U.S. Communities negotiates on behalf of more than 55,000 county and municipal entities (school districts, library systems, police departments) to buy chalk, electronics, books, and the like. A 2018 report by the Institute for Local Self-Reliance documented how a growing share of the physical items that populate public spaces has come to be supplied by Amazon.
At the heart of Amazon’s growing relationship with government is a choking irony. Last year, Amazon didn’t pay a cent of federal tax. The company has mastered the art of avoidance, by exploiting foreign tax havens and moonwalking through the seemingly infinite loopholes that accountants dream up. Amazon may not contribute to the national coffers, but public funds pour into its own bank accounts. Amazon has grown enormous, in part, by shirking tax responsibility. The government rewards this failure with massive contracts, which will make the company even bigger.
What type of ego does Jeff Bezos possess? The president of the United States has tested his capacity for sublimation by pummeling him mercilessly. In Trump’s populist morality play, “Jeff Bozo” is cast as an overlord. He crushes small businesses; he rips off the postal service; he stealthily advances corporate goals through his newspaper, which Trump misleadingly refers to as the “Amazon Washington Post.” During the 2016 campaign, Trump vowed to use the machinery of state to flay Amazon: “If I become president, oh do they have problems.” Don Graham’s warnings about the downsides of newspaper ownership suddenly looked prophetic.
It’s not that Bezos has always whistled past these attacks: In a countertweet, he once joked about launching Donald Trump into space. However, the nature of Bezos’s business, with both government and red-state consumers, means that he would rather avoid presidential hostility.
Despite the vitriol, or perhaps because of it, Amazon hired the lobbyist Jeff Miller, a prodigious Trump fundraiser; Bezos conveys his opinions to the president’s son-in-law, Jared Kushner. In 2017, Bezos won a nomination to join a panel advising the Defense Department on technology, although the swearing-in was canceled after Pentagon officials realized that he had not undergone a background check. (He never joined the panel.) One former White House aide told me, “If Trump knew how much communication Bezos has had with officials in the West Wing, he would lose his mind.”

In the fall of 2017, the Pentagon announced a project called the Joint Enterprise Defense Infrastructure, or JEDI. The project would migrate the Defense Department’s data to a centralized cloud, so that the agency could make better use of artificial intelligence and more easily communicate across distant battlefields. The Pentagon signaled the importance of the venture with the amount it intended to spend on it: $10 billion over 10 years. But it has the potential to be even more lucrative, since the rest of the federal government tends to follow the Pentagon’s technological lead.
Firms vied ferociously to win the contract. Because Amazon was widely seen as the front-runner, it found itself on the receiving end of most of the slings. Its rivals attempted to stoke Trump’s disdain for Bezos. An executive at the technology company Oracle created a flowchart purporting to illustrate Amazon’s efforts, titled “A Conspiracy to Create a Ten Year DoD Cloud Monopoly.” Oracle has denied slipping the graphic to the president, but a copy landed in Trump’s hands.
Oracle also tried to block Amazon in court. Its filings spun a sinister narrative of Amazon infiltrating the Pentagon. A former consultant for Amazon Web Services had landed a top job in the secretary of defense’s office, but at the heart of Oracle’s tale was a project manager named Deap Ubhi, who had arrived at the Pentagon by way of Amazon. Even as he worked in government, Ubhi tweeted: “Once an Amazonian, always an Amazonian.” Oracle alleged that he stayed true to that self-description as he helped shape JEDI to favor his alma mater. (Amazon countered that dozens of people developed the contract, and that Ubhi worked on JEDI for only seven weeks, in its early stages.) When the Pentagon formally announced JEDI’s specifications, only Amazon and Microsoft met them.
Ubhi’s role in the project was concerning, but not enough for either a federal judge or the Pentagon to halt JEDI. There was “smoke,” the judge said, but no “fire.” This victory should have paved the way for Amazon. But with the Pentagon nearly set to award JEDI this summer, the president’s new secretary of defense, Mark Esper, announced that he was delaying the decision and reexamining the contract. A Pentagon official told me that Trump had seen Tucker Carlson inveigh against JEDI on Fox News and asked for an explanation. Senator Marco Rubio, who received more than $5 million in campaign contributions from Oracle during the 2016 campaign cycle, called for the Pentagon to delay awarding the bid, and reportedly pressed the case in a phone call with Trump. (Rubio received a much smaller donation from Amazon in the same period.) Trump seems to have been unable to resist a chance to stick it to his enemy, perhaps mortally imperiling Amazon’s chance to add $10 billion to its bottom line.
Given Trump’s motives, it’s hard not to sympathize with Bezos. But Trump’s spite—and the terrible precedent set by his punishment of a newspaper owner—doesn’t invalidate the questions asked of Amazon. Its critics have argued that government shouldn’t latch itself onto a single company, especially not with a project this important. They noted that storing all of the Pentagon’s secrets with one provider could make them more vulnerable to bad actors. It could also create an unhealthy dependence on a firm that might grow complacent with its assured stream of revenue and lose its innovative edge over time.
JEDI sits within the context of larger questions about the government’s relationship to Amazon. Fears that the public was underwriting the company’s continued growth haunted Amazon’s attempt to build a second headquarters in Queens—New York’s government looked as if it were providing tax breaks and subsidies to the business that least needed a boost.
While Amazon’s aborted move to Long Island City attracted all the attention, the building of a similar bastion just outside Washington, D.C., is more ominous. Of course, there are plenty of honorable reasons for a company to set up shop in the prosperous shadow of the Capitol. But it’s hard to imagine that Amazon wasn’t also thinking about its budding business with the government—an opportunity that the delay of JEDI will hardly dissuade it from pursuing. According to a Government Accountability Office survey of 16 agencies, only 11 percent of the federal government has made the transition to the cloud.
The company is following in its owner’s tracks. Just as Bezos has folded himself into the fraternity of Washington power—yukking it up at the Alfalfa and Gridiron Clubs—thousands of Amazon implants will be absorbed by Washington. Executives will send their kids to the same fancy schools as journalists, think-tank fellows, and high-ranking government officials. Amazonians will accept dinner-party invites from new neighbors. The establishment, plenty capacious, will assimilate millionaire migrants from the other Washington. Amazon’s market power will be matched by political power; the interests of the state and the interests of one enormous corporation will further jumble—the sort of combination that has, in the past, never worked out well for democracy.
4.0

Jeff Bezos was with his people, the feted guest at the 2018 meeting of the National Space Society. The group awarded him a prize it could be sure he would appreciate: the Gerard K. O’Neill Memorial Award for Space Settlement Advocacy. After a dinner in his honor, Bezos sat onstage to chat with an editor from GeekWire.
But before the discussion could begin, Bezos interjected a question: “Does anybody here in this audience watch a TV show called The Expanse ?” The question pandered to the crowd, eliciting applause, hoots, and whistles.
The Expanse, which had been broadcast on the Syfy channel, is about the existential struggles of a space colony, set in the far future, based on novels that Bezos adores. Despite the militancy of its devoted fans, Syfy had canceled The Expanse.
Angry protests had ensued. A plane had flown over an Amazon office in Santa Monica, California, with a banner urging the company to pick up the show.
As the Space Society’s exuberant reaction to Bezos’s first question began to wane, Bezos juiced the crowd with another: “Do you guys know that the cast of The Expanse is here in the room?” He asked the actors to stand. From his years overseeing a movie studio, Bezos has come to understand the dramatic value of pausing for a beat. “Ten minutes ago,” he told the room, “I just got word that The Expanse is saved.” And, in fact, he was its benefactor. Invoking the name of the spaceship at the center of the series, he allowed himself to savor the fist-pumping euphoria that surrounded him. “The Rocinante is safe.”

The Expanse was one small addition to Bezos’s Hollywood empire, which will soon be housed in the old Culver Studios, where Hitchcock once filmed Rebecca and Scorsese shot Raging Bull.
Amazon will spend an estimated $5 billion to $6 billion on TV shows and movies this year.
When Bezos first announced Amazon’s arrival in Hollywood, he bluntly stated his revolutionary intent. He vowed to create “a completely new way of making movies,” as he put it to Wired.
Amazon set up a page so that anyone, no matter their experience, could submit scripts for consideration. It promised that it would let data drive the projects it commissioned—some in the company liked to describe this as the marriage of “art and science.” This bluster about Amazon’s heterodox approach turned out to be unreflective of the course it would chart. When it streamed its second batch of pilots, in 2014, it analyzed viewing patterns, then set aside the evidence. Bezos walked into the green-light meeting and announced that Amazon needed to press forward with the least-watched of the five pilots: Transparent , a show about a transgender parent of three adult children. Bezos had read the rave reviews and made up his mind.
The critical success of Transparent set the template for Amazon Studios. In the early 2010s, the best talent still preferred to work for cable networks. For a new platform to pry that talent away and attract viewers, it needed to generate attention, to schedule a noisy slate. Instead of playing to the masses, Amazon defined itself as an indie studio, catering to urban upper-middle-class tastes, although the executives in Seattle were hardly hipsters themselves. One former executive from Amazon’s book-publishing arm told me, “I remember when Lena Dunham’s proposal was going out, they were like, ‘Who is Lena Dunham?’”

As a nascent venture, Amazon Studios was forced to hew closely to one of Amazon’s Leadership Principles: Frugality. Executives rummaged through other companies’ rejection piles for unconventional scripts. The studio bought Catastrophe, a cast-aside comedy, for $100,000 an episode. With the BBC, it acquired the first season of Fleabag for about $3 million.
Parsimony proved to be a creative stimulant. The studio’s risky projects were awards magnets. Amazon won Golden Globes in all five years it was in contention. When the camera panned for black-tie reaction shots to these victories, the glare of Bezos’s unmistakable scalp would jump off the screen. According to his colleagues, these awards provided him with palpable pleasure, and he thrust himself into their pursuit. To curry favor with those who cast ballots for big prizes, he hosted parties at his Beverly Hills property, which had once been owned by DreamWorks co-founder David Geffen.
Reading interviews with Bezos from back in the days of his rapid ascent, it’s hard to believe that he ever imagined becoming a king of Hollywood or that leading men like Matt Damon would drape their arms over his shoulders and pose for photographs as if they were chums. When he talked about his own nerdiness, he was self-effacing, sometimes painfully so. He once told Playboy , “I am not the kind of person women fall in love with. I sort of grow on them, like a fungus.” When Bezos attended the 2013 Vanity Fair Oscars party, he didn’t act as if he owned the room. Still, while Google co-founder Sergey Brin kept to a corner, Bezos and his now ex-wife, MacKenzie, circulated through the throngs. They might have clung to each other, but they also gamely engaged whoever approached them. MacKenzie once admitted to Vogue that her introversion made her nervous at such events, but she described her husband as a “very social guy.” Hollywood, both the business and the scene, is an intoxicant. Just as in Washington, Bezos immersed himself in a new culture. Paparazzi captured him yachting with the media mogul Barry Diller. He got to know the powerful agent Patrick Whitesell, whose wife, Lauren Sanchez, would later become Bezos’s girlfriend. He began to appear at the parties of famous producers, such as Mark Burnett, the creator of Survivor and The Apprentice.
As one Hollywood executive told me, “Bezos is always showing up. He would go to the opening of an envelope.”

Bezos has justified Amazon’s investment in Hollywood with a quip: “When we win a Golden Globe, it helps us sell more shoes.” This is an intentionally glib way of saying that Amazon is different from its competitors. It’s not just a streaming service (like Netflix) or a constellation of channels (like Comcast), although it’s both of those things. Amazon is an enclosed ecosystem, and it hopes that its video offerings will prove a relatively inexpensive method of convincing people to live within it.
Amazon’s goal is visible in one of the metrics that it uses to judge the success of its programming. It examines the viewing habits of users who sign up for free trials of Amazon Prime , and then calculates how many new subscriptions to the service a piece of programming generates. As it deliberates over a show’s fate, Amazon considers a program’s production costs relative to the new subscriptions it yields. In the earliest days of the studio, nice reviews might have been enough to overcome these analytics. But Amazon has demonstrated that it will cancel even a Golden Globe winner, such as I Love Dick , if the metrics suggest that fate.
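The passage above gives the metric only in outline; the arithmetic it implies is simple: divide what a show cost by the new Prime sign-ups credited to it. A minimal sketch of that calculation, with invented show names and figures, since Amazon's actual attribution model is not public:

```python
# Hypothetical sketch of the green-light metric described above: production
# cost weighed against the new Prime sign-ups a show is credited with.
# Show names and all figures are invented for illustration; Amazon's real
# attribution model is not public.

def cost_per_new_subscriber(production_cost: float, new_subscribers: int) -> float:
    """Dollars of production spend per Prime subscription attributed to a show."""
    if new_subscribers == 0:
        return float("inf")  # a show that converts no one is infinitely expensive
    return production_cost / new_subscribers

shows = {
    "splashy_genre_epic": (72_000_000, 1_150_000),  # (cost, attributed sign-ups)
    "acclaimed_indie":    (48_000_000,   240_000),
}

for name, (cost, subs) in shows.items():
    print(f"{name}: ${cost_per_new_subscriber(cost, subs):,.0f} per new subscriber")
```

By this arithmetic, a critical darling that converts few trial members looks far more expensive per subscriber than a crowd-pleaser, which is how a Golden Globe winner can still get cancelled.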
Back in the ’60s, countercultural critiques of television regarded it as a form of narcotic that induced a state of mindless consumerism. That’s not an unfair description of television’s role in Prime’s subscription model. Despite its own hyperrational approach to the world, Amazon wants to short-circuit the economic decision making of its consumers. Sunil Gupta, a Harvard Business School professor who has studied the company, told me, “When Amazon started Prime, it cost $79 and the benefit was two-day free shipping. Now, most smart people will do the math and they will ask, Is $79 worth it? But Bezos says, I don’t want you to do this math. So I’ll throw in movies and other benefits that make the computation of value difficult.”

When Amazon first created Prime, in 2005, Bezos insisted that the price be set high enough that the program felt like a genuine commitment. Consumers would then set out to redeem this sizable outlay by faithfully consuming through Amazon. One hundred million Prime subscribers later, this turned out to be a masterstroke of behavioral economics. Prime members in the U.S. spend $1,400 a year on Amazon purchases, compared with $600 by nonmembers, according to a survey by Consumer Intelligence Research Partners. It found that 93 percent of Prime customers keep their subscription after the first year; 98 percent keep it after the second. Through Prime, Bezos provided himself a deep pool of cash: When subscriptions auto-renew each year, the company instantly has billions in its pockets. Bezos has turned his site into an almost unthinking habit. The Marvelous Mrs. Maisel and Jack Ryan are essential tools for patterning your existence.
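Those retention figures compound quickly. Here is a back-of-the-envelope sketch using only the numbers quoted above; the starting cohort size, and the assumption that the 98 percent renewal rate holds in later years, are mine:

```python
# Back-of-the-envelope cohort math from the survey figures quoted above:
# 93 percent of Prime members renew after year one, 98 percent after year
# two. The starting cohort size, and the assumption that 98 percent holds
# in every later year, are mine.

FIRST_YEAR_RETENTION = 0.93
LATER_RETENTION = 0.98
MEMBER_SPEND = 1_400     # reported annual U.S. Prime-member spend
NONMEMBER_SPEND = 600    # reported annual nonmember spend

members = 1_000_000  # hypothetical cohort of new subscribers
for year in range(1, 6):
    members = int(members * (FIRST_YEAR_RETENTION if year == 1 else LATER_RETENTION))
    incremental = members * (MEMBER_SPEND - NONMEMBER_SPEND)
    print(f"year {year}: {members:,} still subscribed, "
          f"~${incremental / 1e9:.2f}B in extra annual purchases")
```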
As Bezos has deepened his involvement in the studio, it has begun to make bigger bets that reflect his sensibility. It spent $250 million to acquire the rights to produce a Lord of the Rings TV series. It reportedly paid nine figures for the services of the husband-and-wife team behind HBO’s Westworld and has plans to adapt novels by such sci-fi eminences as Neal Stephenson and William Gibson. Bezos has involved himself in wrangling some of these projects. He made personal pleas to J. R. R. Tolkien’s estate as the Lord of the Rings deal hung in the balance. An agent told me that Bezos has emailed two of his clients directly; Amazon executives apply pressure by invoking his name in calls: He’s asking about this project every day.
As a kid, Bezos would spend summers at his grandfather’s ranch in Cotulla, Texas, where he would help castrate bulls and install pipes. He would also watch soap operas with his grandmother. But his primary entertainment during those long days was science fiction. A fanatic of the genre had donated a robust collection to the local library, and Bezos tore his way through shelves of Isaac Asimov and Jules Verne. Describing his affinity for the novels of the sci-fi writer Iain M. Banks, he once said, “There’s a utopian element to it that I find very attractive.” The comment contains a flash of self-awareness. For all his technocratic instincts, for all his training as an engineer and a hedge-fund quant, a romantic impulse coexists with his rationalism, and sometimes overrides it.
It is perhaps fitting that Bezos’s lone brush with scandal transpired in Hollywood. What befuddled so many of his admirers is that the scandal revealed a streak of indiscipline that doesn’t mesh with the man who created a company so resolutely fixated on the long term, so committed to living its values. The expectation embedded in this confusion is unfair. While the culture has sometimes touted Bezos as a superhero, he’s an earthling in the end. When he creates the terms for his business, or for society, he’s no more capable of dispassion than anyone else. To live in the world of Bezos’s creation is to live in a world of his biases and predilections.
I’m loath to look back at my Amazon purchase history, decades long and filled with items of questionable necessity. The recycling bin outside my house, stuffed full of cardboard covered with arrows bent into smiles, tells enough of a story. I sometimes imagine that the smile represents the company having a good laugh at me. My fidelity to Amazon comes despite my record of criticizing it.
When we depend on Amazon, Amazon gains leverage over us. To sell through the site is to be subjected to a system of discipline and punishment. Amazon effectively dictates the number of items that a seller can place in a box, and the size of the boxes it will handle. (To adhere to Amazon’s stringent requirements, a pet-food company recently reduced its packaging by 34 percent.) Failure to comply with the rules results in a monetary fine. If a company that sells through Amazon Marketplace feels wronged, it has little recourse, because its contract relinquishes the right to sue. These are just the terms of service.
Is there even a choice about Amazon anymore? This is a question that haunts businesses far more than consumers. Companies such as Nike resisted Amazon for years; they poured money into setting up their own e-commerce sites. But even when Nike didn’t sell its products on Amazon, more Nike apparel was sold on the site than apparel from any other brand. Anyone could peddle Nike shoes on Amazon without having to explain how they obtained their inventory. Because Amazon Marketplace had become a pipeline connecting Chinese factories directly to American homes, it also served as a conduit for counterfeit goods, a constant gripe of Nike’s.
Wired reported that, at one point during this year’s Women’s World Cup, six of Amazon’s 10 best-selling jerseys appeared to be knockoffs. To have any hope of controlling this market, Nike concluded that it had no option but to join its rival. (Amazon has said that it prohibits the sale of counterfeit products.)

Ben Thompson, the founder of Stratechery, a website that vivisects Silicon Valley companies, has incisively described Amazon’s master plan. He argues that the company wants to provide logistics “for basically everyone and everything,” because if everything flows through Amazon, the company will be positioned to collect a “tax” on a stunning array of transactions. When Amazon sells subscriptions to premium cable channels such as Showtime and Starz, it reportedly takes anywhere from a 15 to 50 percent cut. While an item sits in an Amazon warehouse waiting to be purchased, the seller pays a rental fee. Amazon allows vendors to buy superior placement in its search results (it then marks those results as sponsored), and it has carved up the space on its own pages so that they can be leased as advertising. If a business hopes to gain access to Amazon’s economies of scale, it has to pay the tolls. The man who styles himself as the heroic Jean-Luc Picard has thus built a business that better resembles Picard’s archenemy, the Borg, a society-swallowing entity that informs victims, You will be assimilated and Resistance is futile.
In the end, all that is admirable and fearsome about Amazon converges. Every item can be found on its site, which makes it the greatest shopping experience ever conceived. Every item can be found on its site, which means market power is dangerously concentrated in one company. Amazon’s smart speakers have the magical power to translate the spoken word into electronic action; Amazon’s doorbell cameras have the capacity to send video to the police, expanding the surveillance state. With its unique management structure and crystalline articulation of values and comprehensive collection of data, Amazon effortlessly scales into new businesses, a reason to marvel and cower. Jeff Bezos has won capitalism. The question for democracy is, are we okay with that?

On Jeff Bezos’s ranch in West Texas, there is a mountain. Burrowed inside its hollowed-out core is a cascading tower of interlaced Geneva wheels, levers, and a bimetallic spring. These innards, still not fully assembled, will move the Clock of the Long Now, a timepiece that has been designed to run with perfect accuracy for 10,000 years, with a hand that advances with each turn of the century. Bezos has supplied $42 million to fund the clock’s construction, an attempt to dislodge humans from the present moment, to extend the species’ sense of time. Bezos has argued that if humans “think long term, we can accomplish things that we wouldn’t otherwise accomplish.”

Performance reviews at Amazon ask employees to name their “superpower.” An employer probably shouldn’t create the expectation that its staff members possess qualities that extend beyond mortal reach, but I’m guessing Bezos would answer by pointing to his ability to think into the future. He dwells on the details without sacrificing his clarity about the ultimate destination. It’s why he can simultaneously prod one company to master the grocery business while he pushes another to send astronauts to the moon by 2024, in the hope that humans will eventually mine the astronomical body for the resources needed to sustain colonies. Bezos has no hope of ever visiting one of these colonies, which wouldn’t arise until long after his death, but that fact does nothing to diminish the intensity of his efforts.
That Donald Trump has picked Jeff Bezos as a foil is fitting. They represent dueling reactions to the dysfunction of so much of American life. In the face of the manipulative emotionalism of this presidency, it’s hard not to pine for a technocratic alternative, to yearn for a utopia of competence and rules. As Trump runs down the country, Bezos builds things that function as promised.
Yet the erosion of democracy comes in different forms. Untrammeled private power might not seem the biggest threat when public power takes such abusive form. But the country needs to think like Bezos and consider the longer sweep of history before permitting so much responsibility to pool in one man, who, without ever receiving a vote, assumes roles once reserved for the state. His company has become the shared national infrastructure; it shapes the future of the workplace with its robots; it will populate the skies with its drones; its website determines which industries thrive and which fall to the side. His investments in space travel may remake the heavens. The incapacity of the political system to ponder the problem of his power, let alone check it, guarantees his Long Now. He is fixated on the distance because he knows it belongs to him.
" |
289 | 2,019 | "Why Is the World So Loud? - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2019/11/the-end-of-silence/598366" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore The Tech Issue: Jeff Bezos’s master plan, when GoFundMe gets ugly, and why the world is getting louder. Plus Mark Bowden on what military generals think of Trump, Jack Goldsmith’s family and government surveillance, Sandra Boynton, baseball cards, why you never see your friends, and more.
Jeff Bezos’s Master Plan Franklin Foer Top Military Officers Unload on Trump Mark Bowden Why Everything Is Getting Louder Bianca Bosker When GoFundMe Gets Ugly Rachel Monroe My Family Story of Love, the Mob, and Government Surveillance Jack Goldsmith Why You Never See Your Friends Anymore Judith Shulevitz A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
Why Everything Is Getting Louder

The tech industry is producing a rising din. Our bodies can’t adapt.

By Bianca Bosker
Karthic Thallikar first noticed the noise sometime in late 2014, back when he still enjoyed taking walks around his neighborhood.
He’d been living with his wife and two kids in the Brittany Heights subdivision in Chandler, Arizona, for two years by then, in a taupe two-story house that Thallikar had fallen in love with on his first visit. The double-height ceilings made it seem airy and expansive; there was a playground around the corner; and the neighbors were friendly, educated people who worked in auto finance or at Intel or at the local high school. Thallikar loved that he could stand in the driveway, look out past a hayfield and the desert scrub of Gila River Indian land, and see the jagged pink outlines of the Estrella Mountains. Until recently, the area around Brittany Heights had been mostly farmland, and there remained a patchwork of alfalfa fields alongside open ranges scruffy with mesquite and coyotes.
In the evenings, after work, Thallikar liked to decompress by taking long walks around Brittany Heights, following Musket Way to Carriage Lane to Marlin Drive almost as far as the San Palacio and Clemente Ranch housing developments. It was during one of these strolls that Thallikar first became aware of a low, monotone hum, like a blender whirring somewhere in the distance. It was irritating, but he wrote it off. Someone’s pool pump, probably. On another walk a few days later, he heard it again. A carpet-cleaning machine? he wondered. A few nights later, there it was again. It sounded a bit like warped music from some far-off party, but there was no thump or rhythm to the sound. Just one single, persistent note: EHHNNNNNNNN. Evening after evening, he realized, the sound was there—every night, on every street. The whine became a constant, annoying soundtrack to his walks.
And then it spread. In early 2015, Thallikar discovered that the hum had followed him home. This being Arizona, Thallikar and his neighbors rewarded themselves for surviving the punishing summers by spending mild winter evenings outside: grilling, reading, napping around plunge pools, dining under the twinkle of string lights. Thallikar had installed a firepit and Adirondack chairs in his backyard. But whenever he went out to cook or read, there was that damn whine—on the weekends, in the afternoon, late into the night. It was aggravating, and he felt mounting anxiety every day it continued. Where was it coming from? Would it stop? Would it get worse? He started spending more time inside.
Then it was in his bedroom. He had just closed his eyes to go to sleep one night when he heard it: EHHNNNNNNNN. He got up to shut the window, but that made no difference at all. “That was when I started getting concerned,” he observed later. He tried sleeping with earplugs. When that didn’t help, he also tied a towel around his head. When that still wasn’t enough, he moved into the guest room, where the hum seemed slightly fainter. Each night, he’d will himself to sleep, ears plugged and head bandaged, but he could feel the whine in his bones, feel himself getting panicky as it droned on and on and on and on and on. The noise hummed 24 hours a day, seven days a week, like a mosquito buzzing in his ear, only louder and more persistent. He sensed it coming from everywhere at once. Thallikar began to dread going home. As the months passed, he felt like he was in a war zone. He wrote in a text message that he felt as though someone was launching “an acoustic attack” on his home.
The earliest noise complaint in history also concerns a bad night’s sleep. The 4,000-year-old Epic of Gilgamesh recounts how one of the gods, unable to sleep through humanity’s racket and presumably a little cranky, opts “to exterminate mankind.” Noise—or what the professionals call a “very dynamic acoustic environment”—can still provoke people to murderous extremes, especially when the emitter disturbs the receiver at home. After repeated attempts to quiet his raucous neighbor, a Fort Worth, Texas, father of two, perturbed by loud music at 2 a.m., called the police, who came, left, and returned less than an hour later, after the man had allegedly shot his neighbor three times—an incident not to be confused with the time a Houston man interrupted his neighbor’s late-night party and, after a showdown over noise, shot and killed the host. In New York City, a former tour-bus driver fed up with noisy parties across the hall allegedly sought help from a hit man. A man in Pennsylvania, said to have had no more trouble with the law than a traffic ticket, ambushed an upstairs couple with whom he’d had noise disputes, shooting them and then himself, and leaving behind a sticky note that read, “Can only be provoked so long before exploding.” There’s the man accused of threatening his noisy neighbors with a gun, the man who shot a middle-school coach after they quarreled over noise, the man who fired on a mother and daughter after griping about sounds from their apartment, the man who killed his roommate after a futile request that he “quiet down,” and the woman who shot at a neighbor after being asked to turn down her music—all since the beginning of this year.
Noise is never just about sound; it is inseparable from issues of power and powerlessness. It is a violation we can’t control and to which, because of our anatomy, we cannot close ourselves off. “We have all thought of killing our neighbors at some point,” a soft-spoken scientist researching noise abatement told me.
As environmental hazards go, noise gets low billing. There is no Michael Pollan of sound; limiting your noise intake has none of the cachet of going paleo or doing a cleanse. When The New Yorker recently proposed noise pollution as the next public-health crisis, the internet scoffed. “Pollution pollution is the next big (and current) public health crisis,” chided one commenter. Noise is treated less as a health risk than an aesthetic nuisance—a cause for people who, in between rounds of golf and art openings, fuss over the leaf blowers outside their vacation homes. Complaining about noise elicits eye rolls. Nothing will get you labeled a crank faster.
Scientists have known for decades that noise—even at the seemingly innocuous volume of car traffic—is bad for us. “Calling noise a nuisance is like calling smog an inconvenience,” former U.S. Surgeon General William Stewart said in 1978. In the years since, numerous studies have only underscored his assertion that noise “must be considered a hazard to the health of people everywhere.” Say you’re trying to fall asleep. You may think you’ve tuned out the grumble of trucks downshifting outside, but your body has not: Your adrenal glands are pumping stress hormones, your blood pressure and heart rate are rising, your digestion is slowing down. Your brain continues to process sounds while you snooze, and your blood pressure spikes in response to clatter as low as 33 decibels—slightly louder than a purring cat.
Experts say your body does not adapt to noise. Large-scale studies show that if the din keeps up—over days, months, years—noise exposure increases your risk of high blood pressure, coronary heart disease, and heart attacks, as well as strokes, diabetes, dementia, and depression. Children suffer not only physically—18 months after a new airport opened in Munich, the blood pressure and stress-hormone levels of neighboring children soared—but also behaviorally and cognitively. A landmark study published in 1975 found that the reading scores of sixth graders whose classroom faced a clattering subway track lagged nearly a year behind those of students in quieter classrooms—a difference that disappeared once soundproofing materials were installed. Noise might also make us mean: A 1969 study suggested that test subjects exposed to noise, even the gentle fuzz of white noise, become more aggressive and more eager to zap fellow subjects with electric shocks.
In the extreme, sound becomes a weapon. Since at least the 1960s, scientists have investigated sound’s potential to subdue hostage-takers, protesters, and enemy troops, against whom one expert proposed using low-frequency sound, because it apparently induces “disorientation, vomiting fits, bowel spasms, uncontrollable defecation.” The U.S. military, keenly aware of noise’s power to confuse and annoy, has wielded soundtracks as punishment: It tried to hurry along the Panamanian dictator Manuel Noriega’s surrender by blasting his hideout with rock music (Kiss and Rick Astley made the playlist); attacked Fallujah, Iraq, while pounding heavy metal on the battlefield (Guns N’ Roses, AC/DC); tortured Guantánamo detainees with a nonstop barrage of rap and theme songs (Eminem, the Meow Mix jingle); and, under the supervision of the FBI, attempted to aggravate the Branch Davidian cult of Waco, Texas, into surrender with a constant loop of Christmas carols, Nancy Sinatra, Tibetan chants, and dying rabbits. (“If they go Barry Manilow,” said a hostage negotiator at the time, “it’s excessive force.”)

Even when not intentionally deployed for harm, the sound of drilling, barking, building, crying, singing, clomping, dancing, piano practicing, lawn mowing, and generator running becomes, to those exposed, a source of severe anguish that is entirely at odds with our cavalier attitude toward noise. “It feels like it’s eating at your body,” a man plagued by a rattling boiler told a reporter. A woman who was being accosted on all sides by incessant honking told me, “The noise had literally pushed me to a level of feeling suicidal.” For those grappling with it, noise is “chaos,” “torture,” “unbearable,” “nauseating,” “depressing and nerve-racking,” “absolute hell,” and “an ice pick to the brain.” “If you didn’t know they were talking about noise, you might think they were describing some sort of assault,” Erica Walker, an environmental-health researcher at Boston University, has said. This has spurred scientists, physicians, activists, public officials, and, albeit less in the United States, lawmakers to join in the quest for quiet, which is far more elusive than it may seem. “Quiet places,” says the acoustic ecologist Gordon Hempton, “have been on the road to extinction at a rate that far exceeds the extinction of species.”

Thallikar went hunting for the source of the sound. At first he canvassed the neighborhood by foot, setting out around 10 or 11 o’clock at night, once the thrum of traffic had quieted down. When these “noise patrols,” as he called them, yielded no answers, he expanded his perimeter—by bike, then by car. He’d pull over every few blocks to listen for the whine. The hum was everywhere: outside Building E of the Tri-City Baptist Church and the apartments in San Palacio; near the Extra Space Storage and the no perfect people allowed sign at Hope Covenant Church; ricocheting around the homes in Canopy Lane, Clemente Ranch, Stonefield, the Reserve at Stonefield. He’d go out multiple nights a week, for 10 minutes to an hour, taking notes on where the noise was loudest. The patrols dragged on—one week, two weeks, eight weeks—which led to spats with his wife, who wanted to know why he kept leaving the house so late at night.
Finally, as winter warmed into spring, Thallikar thought he’d identified the source of the whine: a gray, nearly windowless building about half a mile from his house. The two-story structure, which had the charm of a prison and the architectural panache of a shoebox, was clad in concrete and surrounded by chain-link and black-metal fences, plus a cinder-block wall. It belonged to a company called CyrusOne.
There was no thrill in this discovery, just simmering fear that the noise might get worse. Thallikar visited the city-planning clerk, multiple times. She said she couldn’t help and referred him to CyrusOne’s construction manager. Kept awake by the noise at 11 o’clock one Saturday night, Thallikar phoned the man, who protested that he was trying to sleep. “I’m trying to sleep too, dude!” Thallikar told him. When they spoke again the next day, the call ended abruptly, and without resolution.
According to CyrusOne’s website, the company’s Chandler campus offers Fortune 500 companies robust infrastructure for mission-critical applications. In other words, it’s a data center—a columbarium for thousands of servers that store data for access and processing from virtually anywhere in the world. When you check your bank balance or research a used car or book a hotel room, chances are decent that the information comes to you via one of the more than 40 CyrusOne data centers spread around the globe. CyrusOne houses servers belonging to nearly 1,000 companies, including Microsoft, Country Financial, Brink’s, Carfax, and nearly half of the Fortune 20.
Thallikar, wanting to confront the noise personally, made a surprise visit to CyrusOne. He found workers putting up a new building, but learned that the whine was unrelated to construction. It came from the chillers, a bulky assemblage of steel boxes and tubes permanently affixed to the sides of the two existing buildings. Servers, like humans, are happiest at temperatures between 60 and 90 degrees Fahrenheit, and the chillers were crucial in keeping the heat-generating machines comfortably cool as they worked. In the fall of 2014, around the time Thallikar started noticing the whine, CyrusOne had had room for 16 chillers. Now it was getting ready to add eight more. During a follow-up visit, Thallikar, who grew up in Bangalore and moved to Arizona in 1990 to study industrial engineering at Arizona State University, said he was informed by a worker at the site that immigrants like him should feel lucky to live in the U.S., noise be damned.
CyrusOne arrived in Chandler shortly before Thallikar did and broke ground two months after he closed on his home. For CyrusOne, Chandler was a “dream come true,” Kevin Timmons, the company’s chief technology officer, told me. The city essentially offered CyrusOne carte blanche to develop an area three times the size of Ellis Island into one of the nation’s largest data-storage complexes: 2 million square feet protected by biometric locks, steel-lined walls, bullet-resistant glass, and dual-action interlocking dry-pipe sprinkler systems. CyrusOne even has two of its own substations humming with enough power (112 megawatts) to light up every home in Salt Lake City—or, more relevant to the matter at hand, to run several dozen 400- and 500-ton chillers. CyrusOne’s Chandler facility was not only the company’s most ambitious, but the biggest to realize its strategy of wooing clients through ultrafast, just-in-time construction. CyrusOne could now boast of being able to complete a building in 107 days—faster than customers could have their servers ready. “It literally put us on the map,” Timmons said.
Arizona attracts data centers the way Florida attracts plastic surgeons. The state has low humidity; proximity to California—where many users and customers are based—but without its earthquakes or energy prices; and, thanks to lobbying efforts by CyrusOne, generous tax incentives for companies that drop their servers there. Walk 10 minutes due north from CyrusOne’s Chandler complex, and you’ll reach two other data centers, with a third just down the road. Drive 15 minutes from there, and you’ll come across three more. Continue farther east past Wild West Paintball, and you’ll hit an Apple data center, which will soon be joined by a Google facility, plus another data center from CyrusOne. Forty-five minutes west of Thallikar’s home, Compass Datacenters is building on more than 225 acres of land, a plot three times the size of CyrusOne’s in Chandler.
By the summer of 2015, Thallikar had thrown himself into an aggressive campaign to quiet the hum. He went up and down the city’s chain of command, pleading for help. He emailed Chandler’s economic-development innovation manager, its economic-development specialist, and its economic-development director, who replied that Thallikar was the only resident to complain, but dutifully went out, twice, to listen for the high-pitched whine. He didn’t hear it. “I do not think I am imagining things here and wasting people’s time,” Thallikar wrote back, adding that he’d taken his family on his patrol, “and they too could hear the noise.” Thallikar emailed a news anchor, an executive producer, an editor, and several reporters at the local 12 News TV station, offering to help them “in experiencing the problem so they can relate to it.” He emailed the mayor and all five members of the Chandler city council. Multiple times. Then daily. “The noise gets louder in the night and enters our homes. And the streets are filled with it,” Thallikar wrote in one email. In another: “Just what will it take for one of you to respond to my emails.” He presented his case at a city-council meeting, requesting that a task force be formed to research and stop the whine. He acknowledged that he’d been told the sound seemed suspiciously similar to the buzz of traffic on the 202 freeway nearby.
Thallikar took his campaign to his homeowners’ association and to his neighbors. The response was tepid, though he did persuade one person to email the city. Thallikar reached out, again, to CyrusOne, and to the Chandler Police Department. Commander Gregg Jacquin promised to investigate, but suggested that Thallikar might have more success if he cooled it with all the emails to city officials, which were creeping into the high double digits. Thallikar started keeping a log of how the noise changed, hour to hour and day to day. It was getting louder, he was sure.
In the fall of 2015, Jacquin emailed Thallikar to say that he’d gone in search of the noise, but hadn’t heard it. “I am not making this up—even though I do not have the measurement numbers,” Thallikar wrote back. “The noise heard over the weekend starting on Saturday starting around 10 pm through Sunday was very very bad. I got a nervous headache, and had to take medications.” He never heard back from Jacquin. Before long, Thallikar began to contemplate selling his home.
Noise is a clever enemy. It leaves no trace and vanishes when chased. It’s hard to measure or describe. It is also relative. “Sound is when you mow your lawn, noise is when your neighbor mows their lawn, and music is when your neighbor mows your lawn,” says Arjun Shankar, an acoustic consultant. Noise is also fiendishly difficult to legislate, though for nearly as long as humans have lived together, we have seen fit to try. The ancient Greeks of Sybaris are credited with introducing the first noise ordinance, in the eighth century b.c., banishing roosters as well as blacksmiths, carpenters, and other “noisy arts” from the city limits. In the United States, the appetite for noise control reached its apex in 1972, when President Richard Nixon enacted the country’s first federal statute specifically targeting noise pollution, which empowered the Environmental Protection Agency to quiet the country. Nine years later, the Reagan administration withdrew funding for the EPA’s Office of Noise Abatement and Control, foisting responsibility back onto state and local governments. Since then, little has changed. “Unfortunately,” says New York City’s longtime noise czar, Arline Bronzaft, “the federal government is essentially out of the noise business.”

In the ensuing decades, the war on noise has shifted to the margins—a loose flock of mom-and-pop organizers whose agitations have all the glitz and edge of a church bake sale. The mood on pro-quiet listservs skews defeatist, the general tone more support group than picket line. (The landing page for the Right to Quiet Society politely instructs newcomers, “If you did not like what you saw here, without telling us, you might consider leaving quietly.”) Anti-noise crusaders band together in ragtag crews united by geography or irritant. Depending on whether your trigger point concerns planes, trains, blowers, Jet Skis, dirt bikes, concerts, boom cars, cars, motorcycles, or Muzak, you might join ROAR (Residents Opposed to Airport Racket), HORN (Halt Outrageous Railroad Noise), BLAST (Ban Leaf Blowers and Save Our Town), CALM (Clean Alternative Landscaping Methods), HEAVEN (Healthier Environment Through Abatement of Vehicle Emission and Noise), CRASH (County Residents Against Speedway Havoc), Pipedown (“the campaign for freedom from piped music”), or roughly 150 other organizations with varying levels of activity. In the United States, one of the few emitter-agnostic groups with a national scope is Noise Free America, which has 51 local chapters, noise counselors on call, and, for four out of the past six years, a tradition of going to Washington, D.C., to petition lawmakers—the pinnacle of which was once getting to meet then–Minority Leader Nancy Pelosi’s deputy chief of staff.
On a recent Sunday morning, I joined Noise Free America’s founder and director, Ted Rueter, for what he billed as a “noise tour” of Brooklyn—a pilgrimage to some of the borough’s most sonorously grating street corners. Rueter, a 62-year-old political-science professor, met me at a Starbucks on Flatbush Avenue wearing khaki shorts, a pink polo shirt, and Bose noise-canceling headphones. He was joined by three New Yorkers concerned with the din of their neighborhoods: Manohar Kanuri, a former stock analyst who lives above the incessant beeping of construction and delivery trucks in Manhattan’s Battery Park City; Ashley, a 40-something who’s moved three times in an effort to escape thunderous parties; and Vivianne, a woman who lives with the constant staccato of honking livery cabs, dollar vans, and impatient drivers. (Ashley and Vivianne asked not to be identified by their real names.) For Rueter, who was in town from Durham, North Carolina, a tour of New York’s cacophony seemed to have the exotic thrill of going on safari. Kanuri, Ashley, and Vivianne had corresponded extensively online, but this was their first time meeting in person, and they appeared delighted at getting to bond with sympathetic ears. “We build coalition this way,” Kanuri said.
All three New Yorkers had tried tackling their noise issues through traditional avenues—the 311 nonemergency line (which receives more reports about noise than about any other issue), the local police, their city-council members, the public advocate, the mayor—but found the city unsympathetic, unresponsive, or ineffective. Before heading out on the noise tour, they sat in the Starbucks venting about the difficulties of catching emitters in the act and encouraging police to take action. Ashley had placed so many 311 calls that she worried about getting arrested, like a Bronx woman who was thrown in a holding cell on charges of entering false information in the public record after calling 44 times in 15 months—often to report her neighbors’ racket. Vivianne warned Ashley that the police had probably pegged her as a “serial complainer”—among anti-noise crusaders, a dreaded fate.
Noise codes tend to be either qualitative (prohibiting subjectively defined “disturbing” or “unreasonably loud” noise) or quantitative (defining, in measurable terms, what constitutes disturbing or unreasonably loud noise). New York City’s noise code, which is the latter, considers barking a nuisance only if a dog yaps for 10 minutes straight between the hours of 7 a.m. and 10 p.m., or for five minutes straight between the hours of 10 p.m. and 7 a.m. (Four and a half minutes of barking at 2 a.m. is, technically, permissible.) At night, restaurants can be fined if their music measures in excess of 42 decibels from inside a nearby apartment and seven decibels above the level of ambient street sounds.
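Because New York's code is quantitative, its barking provision reads almost like executable logic. A toy sketch of the rule as paraphrased above; the function and its simplification (classifying a bark by its start time alone) are mine, not the city's:

```python
# Toy encoding of the barking provision paraphrased above: a violation
# after 10 continuous minutes between 7 a.m. and 10 p.m., or 5 continuous
# minutes between 10 p.m. and 7 a.m. Simplified (it classifies a bark by
# its start time only); not the city's actual enforcement logic.

def barking_is_violation(start_hour: int, duration_minutes: float) -> bool:
    daytime = 7 <= start_hour < 22           # 7 a.m. up to 10 p.m.
    threshold = 10 if daytime else 5         # minutes of continuous barking
    return duration_minutes >= threshold

print(barking_is_violation(start_hour=2, duration_minutes=4.5))   # False: permissible at 2 a.m.
print(barking_is_violation(start_hour=2, duration_minutes=5.0))   # True
print(barking_is_violation(start_hour=14, duration_minutes=9.0))  # False: under the daytime limit
```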
Most ordinances correlate punishable noise with loudness, though if you’ve ever tried to sleep through a dripping faucet, you know that something can be quiet and still drive you up the wall. Research confirms that what makes a sound annoying is only partially whether it whispers or roars. The volume at which noise begins to irritate varies depending on the source—we tolerate trains at louder volumes than cars, and cars at louder volumes than planes—and its pitch, or frequency. (Humans can hear sounds between 20 and 20,000 hertz, which roughly ranges from the low-frequency thump of subwoofers to the high-frequency buzz of certain crickets.) We are more sensitive to mid-frequency sounds—voices, birdsong, squealing brakes, shrieking infants—and perceive these sounds as louder than they are. Contrary to the stereotype of the old man shaking his fist, age and gender are not necessarily strong predictors of annoyance.
Nor must noises be heard in order to harm. Earplugs may dull the whine of motorcycles chugging outside your bedroom, but they’re useless against the engines’ low-frequency rumble, which vibrates the windows, floors, and your chest, and is the type of sound that’s largely ignored in most official noise calculations. (Harley-Davidson, which considers that thudding a point of pride, tried to trademark the sound of its V-twin motorcycle engine, which its lawyer translated as “potato potato potato” said very fast.) When regulatory officials evaluate environmental noise—to determine, say, whether to soundproof schools near airport runways—their calculations emphasize the mid-frequency sounds to which our ears are most sensitive and discount the low-frequency sounds (think wind turbines, washing machines, kids galloping upstairs) that have been shown to travel farther and trigger stronger stress responses. “If you actually measured sound using the right metric, you’ll see that you’re harming a lot more people than you think you are,” says Walker, the environmental-health researcher, who is working with communities near flight paths and freeways to rethink how noise is quantified.
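The article doesn't name the metric, but the weighting Walker criticizes is, in most regulations, the standard A-weighting curve, which mimics the ear's mid-frequency sensitivity and steeply discounts low frequencies. A minimal sketch using the published IEC 61672 formula; the sample frequencies are my choice:

```python
import math

def a_weight_db(f: float) -> float:
    """Standard A-weighting correction, in dB, at frequency f (hertz)."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00  # offset so a 1,000 Hz tone reads 0 dB

# How an A-weighted meter discounts low-frequency rumble versus a
# mid-frequency whine:
for f in (31.5, 63, 125, 630, 1000, 4000):
    print(f"{f:>6.1f} Hz: {a_weight_db(f):+6.1f} dB")
```

Run it and the discount is plain: a 63-hertz rumble is docked about 26 decibels, while mid-frequency tones pass through nearly untouched.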
Years ago, the staff of a medical-equipment company became spooked by recurring sightings of a gray, spectral figure haunting their lab. One night, an engineer working late alone felt a chill pass through the room and, out of the corner of his eye, saw a soundless figure hovering beside him. When he wheeled around, no one was there. The next day, while adjusting one of the machines in the lab, he began to feel the same creeping unease. The poltergeist? A vibrating extractor fan, he realized. He published a paper on his ghost-busting, which concluded that the machine was emitting low-frequency sound waves: pulses of energy too low in frequency to be heard by humans, yet powerful enough to affect our bodies—comparable, he found, to the inaudible vibrations in a supposedly haunted cellar and in the long, windy hallways that appear in scary stories. In addition to causing shivering, sweating, difficulty breathing, and blurry vision as a result of vibrating eyeballs, low-frequency sounds can also, apparently, produce ghosts.
For two years, Thallikar complained to anyone who would listen and even to those who would not. Meanwhile, CyrusOne kept building. The company finished three new buildings and bought 29 more acres of land in Chandler, growing the site to more than 85 acres. In a press release, it congratulated itself for “ensuring CyrusOne maintains the largest data center campus in the Southwest and one of the largest in the United States,” and cheered plans to build a comparable facility in California.
Some nights, Thallikar couldn’t sleep at all. He started wearing earplugs during the day, and stopped spending time outdoors. He looked for excuses to leave town and, in the evenings, returned to his old neighborhood in Tempe to take his constitutionals there. As he drove home, he’d have a pit in his stomach. He couldn’t stop himself from making the noise a recurring conversation topic at dinner.
Not only was the whine itself agitating— EHHNNNNNNNN —but its constant drone was like a cruel mnemonic for everything that bothered him: his powerlessness, his sense of injustice that the city was ignoring its residents’ welfare, his fear of selling his home for a major loss because no one would want to live with the noise, his regret that his family’s haven (not to mention their biggest investment) had turned into a nightmare.
EHHNNN. EHHNNNNNNNNN. EHHNNNNNNNNNNNN.
He tried meditating. He considered installing new windows to dull the hum, or planting trees to block the noise. He researched lawyers. And he made one final appeal to the newly elected members of the Chandler city council.
Lo and behold, one wrote back, promising to look into the issue.
The council member followed up a few weeks later. “According to the chief, police had visited 16 times on the site and conducted investigations on your claim,” he wrote. “They found the noise level was not significant enough to cause an issue.” Thallikar contacted a real-estate agent. He would lose money, and he’d have to move to a smaller house, but by the end of 2017, he’d decided to sell his home.
To spend time with noise warriors is to become frustratingly attuned to every gurgle, squeal, clank, and creak. As I set out with Rueter and the three New Yorkers on the noise tour, the anonymous din of Flatbush Avenue splintered into a riotous skronk of bleating cars, rattling generators, and snarling planes. Sirens yowled and vents whistled; a motorcycle potato-potato-potatoed and a can skittered on the concrete.
R. Murray Schafer, a Canadian composer who, in the 1960s, pioneered the field of acoustic ecology, has advocated “soundwalks” as an activity that, even more effectively than ordinances, could curb noise pollution by making people more aware of their habitat’s acoustics. A soundwalk—during which you actively listen to the sonic demeanor of your surroundings—might involve tallying the number of car horns you hear in the course of an hour or scavenger-hunting for sounds with specific characteristics, like a buzz followed by a squeak. Schafer saw soundwalks as a way to address our sonological incompetence. Teach people to tune in to their soundscapes, and they will understand which sounds to preserve and which to eliminate, then act accordingly.
The first stop on our noise tour was, mercifully, a place of quiet. We gathered in silence around a small koi pond on the Brooklyn College campus. I forced myself to listen carefully. An air conditioner purred. Water burbled. A child hollered. “See, once a kid comes, that’s when the screaming starts,” Ashley said.
She and Kanuri discussed the inefficacy of earplugs and the pros and cons of analog versus digital white-noise machines. Ashley said she slept with three white-noise machines (which hardly makes her an exception among the sound-sufferers I met) and, because of a whistler in her office, had started wearing earplugs at work.
“Are you familiar with something called slow TV?” Kanuri asked Ashley. “It’s a sailboat that runs 10 hours, and all you hear is the ship breaking water. That’s it. Every now and then you’ll hear bruhhhhh—another ship that passes by. That’s it. It’s beautiful. It’s beautiful.”

Stéphane Pigeon, an audio-processing engineer based in Brussels, has become the Taylor Swift of white noise, traveling the world recording relaxing soundscapes for his website, myNoise.net, which offers its more than 15,000 daily listeners an encyclopedic compendium of noise-masking tracks that range from “Distant Thunder” to “Laundromat,” a listener request. (White noise, technically speaking, contains all audible frequencies in equal proportion. In the natural world, falling rain comes close to approximating this pan-frequency shhhhhh.) Impulse noises, such as honking, barking, hammering, and snoring, are the hardest to mask, but Pigeon has tried: While traveling in the Sahara, he recorded “Berber Tent,” a myNoise hit designed to help snorees by harmonizing the gentle whoosh of wind, the burble of boiling water, and the low rattle of snoring. Because covering up a snorer’s brief, punchy HRROHN! is exceedingly difficult, “the goal is to try to persuade you that snoring could be a beautiful sound,” Pigeon told me.
After a few minutes at the pond, we reluctantly tore ourselves from the quiet to prowl Brooklyn’s streets for sounds. Farther north on Flatbush Avenue, encircled by lowing horns and a wheezing Mister Softee truck, Kanuri used his sound-meter app to measure the ambient noise—a disappointing 75.9 decibels, lower than everyone had thought but still more than 20 decibels above the threshold at which, per a 1974 EPA report, we get distracted or annoyed by sound. (Decibels, which measure volume, are logarithmic: Turn up a sound by 10 decibels, and most people will perceive its loudness as having doubled.) The soundscape shushed as we approached the stately brownstones near Prospect Park, then thumped to life again when we stopped for lunch at, of all places, Screamer’s Pizzeria. “Would it be possible during our short stay here to turn down the music?” Rueter asked a server.
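The parenthetical above compresses two different decibel facts: sound energy grows tenfold with each 10-decibel step, while perceived loudness only roughly doubles. A quick sketch of both relations (the doubling is a psychoacoustic rule of thumb, not exact physics):

```python
# Two readings of "turn it up 10 decibels," per the parenthetical above.
# Decibels are logarithmic in sound power; perceived loudness follows the
# rough psychoacoustic rule that +10 dB sounds about twice as loud.

def power_ratio(delta_db: float) -> float:
    """Physical sound-energy multiplier implied by a decibel increase."""
    return 10 ** (delta_db / 10)

def loudness_ratio(delta_db: float) -> float:
    """Rule-of-thumb perceived-loudness multiplier (+10 dB ~ twice as loud)."""
    return 2 ** (delta_db / 10)

for delta in (10, 20):
    print(f"+{delta} dB: {power_ratio(delta):.0f}x the energy, "
          f"~{loudness_ratio(delta):.0f}x as loud")
```

So the 20-decibel gap Kanuri measured means roughly a hundredfold difference in acoustic energy, which the ear hears as about four times as loud.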
Desperate ears call for desperate measures, and the noise-afflicted go to elaborate lengths to lower the volume. Kanuri taught himself to code so he could analyze New York City’s 311 data and correlate noise complaints with elective districts; he hoped he could hold politicians accountable. Having tried moving bedrooms and also apartments, Ashley is now moving across the country, to a suburb in the Southwest. I spoke with a New Yorker who, unable to afford a move, has been sleeping in her closet—armed with earplugs, headphones, an AC unit, a fan, and two white-noise machines. A Wisconsin man who’d re-insulated, re-drywalled, and re-windowed his home was ultimately offered sleeping medication and antidepressants. An apartment dweller in Beijing, fed up with the calisthenics of the kids upstairs, got revenge by attaching a vibrating motor to his ceiling that rattled the family’s floor. The gadget is available for purchase online, where you can also find Coat of Silence paint, AlphaSorb Bass Traps, the Noise Eater Isolation Foot, the Sound Soother Headband, and the Sonic Nausea Electronic Disruption Device, which promises, irresistibly, “inventive payback.” One might also run for president. Arline Bronzaft, the New York City noise czar, speculates that Donald Trump’s presidential campaign was motivated by his quest to quiet the aircraft that disrupted Mar-a-Lago’s “once serene and tranquil ambience”—so described in one of the lawsuits Trump filed in his 20-year legal battle against Palm Beach County. Six days after he was elected—and the Federal Aviation Administration shared plans to limit flights over his resort—a Trump spokesperson announced that he would abandon the lawsuit.
Scientists have yet to agree on a definition for noise sensitivity, much less determine why some individuals seem more prone to it, though there have been cases linking sensitivity to hearing loss. What is clear, however, is that sound, once noticed, becomes impossible to ignore. “Once you are bothered by a sound, you unconsciously train your brain to hear that sound,” Pigeon said. “That phenomenon just feeds itself into a diabolic loop.” Research suggests habituation, the idea that we’ll just “get used to it,” is a myth. And there is no known cure. Even for sufferers of tinnitus—an auditory affliction researchers understand far better than noise sensitivity—the most effective treatment that specialists can offer is a regimen of “standard audiological niceness”: listening to them complain and reassuring them the noise won’t kill them. Or, as one expert put it, “lending a nice ear.”

During the summer of 2017, Cheryl Jannuzzi, who lived a short drive from Thallikar, in Clemente Ranch, began to hear humming coming from somewhere behind her house. For a while, she’d had to endure the clang and beep of construction, but this was different—like an endlessly revving engine, or a jet warming up for takeoff.
Jannuzzi contacted the city, and was told that the complex directly across Dobson Road from her backyard was a data center. This was news to her, and she wasn’t sure what to make of it. “They’re just housing data,” she thought. “That shouldn’t be making so much noise.”

Around Halloween, Jennifer Goehring started to notice a buzzing sound. It gave her headaches and kept her up at night, but her husband couldn’t hear it, and neither could her kids. She worried that she might be losing her mind. She began sleeping with sound machines and pillows over her head, and went to the doctor to be sure she didn’t have an ear infection. She didn’t.
Amy Weber was with her Bible-study group in her backyard when she became aware of a consistent tone that hummed above everyone’s voices. She and her husband, Steve, had heard the construction on Dobson Road for ages, but this whirring sound didn’t seem to stop, or change. They tried to identify it by process of elimination, even climbing out of bed one night to clear crud from their pool pump, which, they discovered, wasn’t turned on.
Eventually, through their own patrols, they identified the source. The week after Christmas, the Webers papered Clemente Ranch with flyers and created a website asking people if they’d been bothered by a “constant humming/whirring sound” coming from CyrusOne. Complaints from more than 120 people flowed in.
Thallikar heard about the Webers’ efforts from one of his neighbors, and on January 23, 2018, he went to their home for the standing-room-only inaugural meeting of the Dobson Noise Coalition. People complained about headaches, irritability, difficulty sleeping. Jannuzzi had tried to muffle the sound by installing thick wooden barn doors over her sliding glass doors, and another neighbor had mounted sound-absorbing acoustic board in her bedroom windows. For five years, you couldn’t have bought a house on Jannuzzi’s block, but now several of her neighbors were planning to move.
When it was Thallikar’s turn, the story of his three-year odyssey poured out: the sleepless nights, the feelings of being under attack, the unresponsive officials and unanswered emails. Jaws dropped. He wanted to know why no one else had spoken up earlier. “I think we all went through a period of ‘Maybe it’ll go away,’ ” said one neighbor. Others had assumed something was wrong with them, or else had struggled to trace the sound to its source.
The Dobson Noise Coalition jumped into action. Its members circulated a petition asking CyrusOne to stop its racket, which 317 people signed. They wrote to CyrusOne, twice, but heard nothing. They contacted Chandler officials—who were considerably more receptive to the group than they had been to Thallikar alone—and got the city manager to send CyrusOne’s CEO a certified letter requesting a “plan of action.” For weeks, CyrusOne responded with silence.
The nature of noise is shifting. Sonic gripes from the 18th and 19th centuries—church bells, carriage wheels, the hollering of street criers—sound downright charming to today’s ears. Since then, our soundscape has been overpowered by the steady roar of machines: a chorus of cars, planes, trains, pumps, drills, stereos, and turbines; of jackhammers, power saws, chain saws, cellphones, and car alarms, plus generators, ventilators, compressors, street sweepers, helicopters, mowers, and data centers, which are spreading in lockstep with our online obsession and racking up noise complaints along the way. Communities in France, Ireland, Norway, Canada, North Carolina, Montana, Virginia, Colorado, Delaware, and Illinois have all protested the whine of data centers. That’s to say nothing of what drones may bring. “The next century will do to the air what the 20th century did to the land, which is to put roads and noise everywhere,” Les Blomberg, the executive director of the nonprofit Noise Pollution Clearinghouse, told me. Noise, having emancipated itself from the human hand, is becoming autonomous and inexhaustible. Human noisemakers have to sleep, but our mechanical counterparts, which do not tire, die, or strain their vocal cords, can keep up a constant, inescapable clamor.
Study after study has reached the hardly earth-shattering conclusion that we largely prefer the sounds of nature to those of machines. A 2008 research project that played subjects 75 recordings, ranging from a cat’s meow to skidding tires, found the five most agreeable sounds to be running water, bubbling water, flowing water, a small waterfall, and a baby laughing. Other studies—echoing spa brochures—tell us that natural sounds promote relaxation.
And yet we’re muffling them with our racket, to the detriment of other species. The concentration of stress hormones in elk and wolf feces spikes when snowmobiles arrive, then returns to normal when the machines disappear; a similar pattern was observed for North Atlantic right whales subjected to the whine of ship traffic. (One bioacoustics researcher told The New York Times that the acoustic emissions of air guns, used to map the ocean floor, are creating a “living hell” for undersea creatures.) Birds in noisy habitats become screechier to make themselves heard above our din—sparrows that “used to sound like, say, George Clooney would now sound like Bart Simpson,” one ornithologist told a reporter—and this phenomenon has been linked to decreases in species diversity, bird populations, and tree growth.
Though data are scarce, the world appears to be growing louder. The National Park Service’s Natural Sounds and Night Skies Division, which sends researchers to measure the acoustics of the American outdoors, estimates that noise pollution doubles or triples every 30 years.
The EPA last measured our nation’s volume in 1981; assuming (generously) that our collective cacophony has remained constant, calculations from 2013 estimate that more than 145 million Americans are exposed to noise exceeding the recommended limits. In the absence of more recent surveys, the volume at which emergency vehicles shriek is telling, given that sirens must be loud enough to pierce the ambient noise level. According to measurements by R. Murray Schafer, a fire-engine siren from 1912 reached 88 to 96 decibels measured from 11 feet away, whereas by 1974, sirens’ screeches hit 114 decibels at the same distance—an increase in volume, he noted, of about half a decibel a year. The latest fire-engine sirens howl louder still: 123 decibels at 10 feet.
Not everyone bears the brunt of the din equally. Belying its dismissal as a country-club complaint, noise pollution in the U.S. tends to be most severe in poor communities, as well as in neighborhoods with more people of color. A 2017 paper found that urban noise levels were higher in areas with greater proportions of black, Asian, and Hispanic residents than in predominantly white neighborhoods. Urban areas where a majority of residents live below the poverty line were also subjected to significantly higher levels of nighttime noise, and the study’s authors warned that their findings likely underestimated the differences, given that many wealthy homeowners invest in soundproofing.
“If you want to access quietness, more and more you have to pay,” says Antonella Radicchi, an architect who helps map quiet spaces in cities. Radicchi believes access to quiet havens should be a right for every city dweller, not only the rich, who can afford to escape noise—via spas, silent yoga retreats, lush corporate campuses. For $6,450, not including airfare, you too can take a plane to a car to a motorboat to a canoe to a hiking trail to spend three days with a tour group along Ecuador’s Zabalo River, which was recently named the world’s first Wilderness Quiet Park. The designation was developed by the acoustic ecologist Gordon Hempton, who has crisscrossed the globe recording natural soundscapes and, through his nonprofit, Quiet Parks International, is on a mission to “save quiet.” The organization is developing standards to measure the quietness of parks, trails, hotels, and residential communities, and will offer accreditation to areas that are suitably silent. (The Zabalo River qualified for Wilderness Quiet Park status by having a noise-free interval of at least 15 minutes, during which no man-made sounds were audible.)

I spoke with Hempton via Skype several days after he’d returned from the Zabalo River. He was tan, with close-cropped gray hair and a tattoo on each forearm—one, of a leaf, inspired by his most recent visit to the Zabalo and another, he said, by an epiphany during his first solo campout in the Amazon jungle. Like other quiet advocates, Hempton speaks with the calm confidence, parallel sentence structure, and hypnotic cadence of a guru. I asked him what he sees as the value of quiet. “The further we get into quiet, the further we discover who we are,” Hempton said. “When you speak from a quiet place, when you are quiet, you think differently. You are more uniquely yourself. You are not echoing advertisements. You are not echoing billboards. You are not echoing modern songs. You’re echoing where you were.” When I asked Hempton’s co-founder the same thing, he chided me: “That question itself comes from a noisy situation.”

Before starting Quiet Parks International, Hempton launched an effort to preserve the sonic pristineness of the Hoh Rain Forest in Washington’s Olympic National Park. In 2005, Hempton could sit in the park for an hour without hearing man-made sounds—there was only the low, breathy whistle of the wind, the tap of rain on Sitka spruce, black-tailed deer crunching over felled hemlock, and marbled murrelets trilling. Today, thanks to an increase in flights from a naval air base, Hempton says the noise-free interval has dropped to 10 minutes.
This summer, I traveled to Chandler to hear the whine for myself. A few months after the creation of the Dobson Noise Coalition, CyrusOne emailed the group promising to be a “good neighbor” and said it would install “sound attenuation packages” on its chillers by October 2018. But that October came and went, and, the neighbors agreed, the noise was worse than ever.
So they kicked their efforts into high gear. In the 17 months since the Dobson Noise Coalition was founded, its members have consulted lawyers, filed police reports, gotten coverage in the local news, and met with Chandler’s chief of police. Armed with videos, written testimony, and detailed timelines, more than two dozen unsmiling neighbors dressed in red presented their grievances to the Chandler city council. That finally got them a meeting with CyrusOne.
In May, delegates from the Dobson Noise Coalition parleyed with delegates from CyrusOne, including an acoustic consultant the company had hired. According to his measurements, the whine of the chillers falls between 630 and 1,000 hertz—directly in the mid-frequency spectrum, the range our ears are most sensitive to—and is a pure-tone sound, widely considered exceptionally irritating. CyrusOne reiterated that it would spend $2 million wrapping each and every chiller in custom-made, mass-loaded vinyl blankets designed to lower the whine by 10 decibels. Any future chillers would also be swaddled.
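For readers without a feel for the decibel scale: it is logarithmic, so the 10-decibel reduction CyrusOne promised would cut acoustic intensity to a tenth and, by the usual psychoacoustic rule of thumb, roughly halve perceived loudness. A minimal sketch of that arithmetic (the loudness rule is an approximation, not a measurement of the chillers):

```python
# Decibels are logarithmic: a change of x dB multiplies acoustic
# intensity by 10**(x / 10); perceived loudness roughly doubles or
# halves per 10 dB (a rough psychoacoustic rule, not an exact law).

def intensity_ratio(db_change: float) -> float:
    """Intensity multiplier implied by a decibel change."""
    return 10 ** (db_change / 10)

def loudness_ratio(db_change: float) -> float:
    """Approximate perceived-loudness multiplier (doubles per +10 dB)."""
    return 2 ** (db_change / 10)

print(intensity_ratio(-10))  # 0.1 -> one-tenth the acoustic intensity
print(loudness_ratio(-10))   # 0.5 -> roughly half as loud to the ear
```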
Kevin Timmons, CyrusOne’s chief technology officer, took me on a golf-cart tour of the exterior of the mission-critical facility, of which no inside tours are permitted without a signed nondisclosure agreement. Even Timmons kept getting locked out of different quadrants and having to summon security guards for help. He first heard about the noise complaints in early 2018, and said the neighbors’ annoyance came as a surprise. “We were a little bit stunned for a number of months while we tried to figure out if this was real,” he told me. “And it was made clear to us that, whether real or imagined, it is something that we have to do something about.” He regretted not acting faster and worried that even after the seven-figure soundproofing, some people could never unhear the whine: “Once you hear an annoying sound, humans could actually start listening for that sound.” Recently, he told me, residents living near a CyrusOne data center in Dallas have started complaining about a hum.
The week I visited, CyrusOne had finished wrapping 24 of the now 56 chillers at the Chandler complex. The neighbors were split on whether the blankets helped, but they were unanimously livid that the city had allowed a data center in their backyard in the first place. They had a lot of questions about due diligence: What studies had been done? What measurements taken? None, I learned: Chandler’s city planners are not required to consider noise when issuing permits, nor did they. Plus, most of CyrusOne’s land was zoned for industrial use in 1983, 13 years before the closest homes, in Clemente Ranch, were built. The neighbors all knew the local noise code, chapter and verse—“No person shall disturb the peace, quiet and comfort of any neighborhood by creating therein any disturbing or unreasonably loud noise”—and demanded to know why CyrusOne hadn’t at the very least been cited, given that it was unquestionably disturbing their peace, quiet, and comfort.
I posed that question to Commander Edward Upshaw, a 33-year veteran of the Chandler Police Department, as we cruised the outskirts of the CyrusOne campus, a steady hum faintly audible over the rumble of late-afternoon traffic. “Issuing a citation and charging somebody with a crime for this level of noise? Not going to happen,” Upshaw said. We pulled over in Chuparosa Park and stood a few yards from the cinder-block wall that marked the outer edge of CyrusOne. “People sell radios that make white noise or waves that’s louder than this,” he said. “There’s people that pay for this! I don’t know what the issue is.” We drove inside Clemente Ranch. “If you called a New York police officer for this noise, tell me what would happen. Tell me! Tell me what would happen.” The following evening, I drove to Thallikar’s home, one in a row of tidy stucco houses bordered by saguaros and Jeep Wranglers. We sat in his living room next to a glass coffee table covered with folders and papers documenting his noise fight.
After teaming up with the Dobson Noise Coalition, Thallikar decided to hold off on selling his home. He was “cautiously optimistic,” but still wanted to know why the city allowed the “monstrosity,” with its “goddamned machines,” to escape punishment for disturbing the peace. He rejected the idea that anyone could judge the hum based on a short visit. “They are going there and sampling the problem,” Thallikar said. “I’m experiencing it day and night.” But he conceded that CyrusOne’s noise level was about 20 percent better than it had been, and he’d recently moved back into his master bedroom.
As CyrusOne had gotten quieter, though, Thallikar had noticed another, different whine. Through a new round of patrols, he’d traced it to GM Financial, which was equipped with its own platoon of chillers. He presented his findings to the city manager in a PowerPoint presentation, which identified as sources of “injurious noise pollution” chillers and generators at GM Financial; the Digital Realty data center around the corner from his home; and, potentially, the forthcoming Northrop Grumman complex. (Digital Realty and GM Financial said they were aware of the complaints but, after investigating, deemed no action necessary; the owner of Northrop Grumman’s building told me any noise concerns were “unfounded.”) Thallikar offered to take me on a listening tour of the injurious noise pollution, and we hopped into a road-worn Toyota Camry, which Thallikar steered to the GM Financial parking lot. We sidled up to a locked metal gate. “You hear this?” Thallikar said.
EHHNNNNNNNN, said something from within the enclosure. “I don’t know how many units they have inside. You hear this, right? In the evenings it becomes louder and louder.”
After a few other stops, we doubled back to concentrate on the area around CyrusOne. For more than an hour, we circled its campus, pulling over every so often. As the sun and traffic dropped, the intensity of the hum rose. The droning wasn’t loud, but it was noticeable. It became irritatingly noticeable as the sky dimmed to black, escalating from a wheezy buzz to a clear, crisp, unending whine.
“This is depressing,” Thallikar said as we stood on a sidewalk in Clemente Ranch. “Like somebody in pain, crying. Crying constantly and moaning in pain.” We were silent again and listened to the data center moaning. Which was also, in a sense, the sound of us living: the sound of furniture being purchased, of insurance policies compared, of shipments dispatched and deliveries confirmed, of security systems activated, of cable bills paid. In Forest City, North Carolina, where some Facebook servers have moved in, the whine is the sound of people liking, commenting, streaming a video of five creative ways to make eggs, uploading bachelorette-party photos. It’s perhaps the sound of Thallikar’s neighbor posting “Has anyone else noticed how loud it’s been this week?” to the Dobson Noise Coalition’s Facebook group. It’s the sound of us searching for pink-eye cures, or streaming porn, or checking the lyrics to “Old Town Road.” The sound is the exhaust of our activity. Modern life— EHHNNNNNNNN —humming along.
The hum had settled into a strong, unwavering refrain by the time Thallikar dropped me off at my hotel, which looked out over the CyrusOne campus. I could see a new building under construction, plus a lot for another building of equal size. Beyond that, just down the street from where Thallikar lived, was a bald patch of land with space for two more buildings. CyrusOne had room to add 96 more chillers, almost double the number whining now.
This article appears in the November 2019 print edition with the headline “The End of Silence.”
" |
290 | 2,022 | "The Next Pandemic Could Start With a Terrorist Attack - The Atlantic" | "https://www.theatlantic.com/science/archive/2022/02/pandemic-terrorist-attack-biowarfare/622067" | "
The Next Pandemic Could Start With a Terrorist Attack Nations around the world should come together now to determine how best to protect humans from biowarfare.
In 1774, the Swedish chemist Carl Wilhelm Scheele performed an experiment and noticed that he’d created a noxious gas. He named it “dephlogisticated muriatic acid.” We know it today as chlorine.
Two centuries later, another German chemist, Fritz Haber, invented a process to synthesize and mass-produce ammonia, which revolutionized agriculture by generating the modern fertilizer industry. He won the Nobel Prize in Chemistry in 1918. But that same research, combined with Scheele’s earlier discovery, helped create the chemical-weapons program that Germany used in World War I. This is an example of what’s known as the “dual-use dilemma,” in which scientific and technological research is intended for good, but can also, either intentionally or accidentally, be used for harm.
In both chemistry and physics, the dual-use dilemma has long been a concern, and it has led to international treaties limiting the most worrisome applications of problematic research. Because of the Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction (otherwise known as the Chemical Weapons Convention, or CWC), a treaty signed by 130 countries, many dangerous chemicals that are sometimes used in scientific or medical research have to be monitored and inspected.
One example is ricin, which is produced naturally in castor seeds and is lethal to humans in the tiniest amounts. A brief exposure in a mist or a few grains of powder can be fatal, so it is on the CWC list. Triethanolamine, which is used to treat ear infections and impacted earwax, and is an ingredient to thicken face creams and balance the pH of shaving foams, is listed as well because it can also be used to manufacture HN3, a nitrogen-mustard blister agent.
Similar international treaties, enforcement protocols, and agencies exist to monitor dual uses in chemistry, physics, and artificial intelligence. But synthetic biology—which seeks to design or redesign organisms on a molecular level for new purposes, making them adaptable to different environments or giving them different abilities—is so new that such treaties don’t yet exist for it, even though discussions about how to prevent harm have been happening for decades within the scientific community.
In 2000, a team of researchers at the State University of New York at Stony Brook kicked off a two-year experiment to determine whether they could synthesize a live virus from scratch using only publicly available genetic information, off-the-shelf chemicals, and mail-order DNA. (The project was financed with $300,000 from the Defense Advanced Research Projects Agency, as part of a program to develop biowarfare countermeasures.) The researchers purchased short stretches of DNA and painstakingly pieced them together, using 19 additional markers to distinguish their synthetic virus from the natural strain they were attempting to reproduce.
They succeeded. On July 12, 2002—just after Americans had celebrated the first Fourth of July following the 9/11 terrorist attacks, when jittery millions were relieved that another horrific event hadn’t happened on that holiday—those scientists announced that they had re-created the poliovirus in their lab using code, material, and equipment that anyone, even al-Qaeda, could get their hands on. They’d made the virus to send a warning that terrorists might be making biological weapons and that bad actors no longer needed a live virus to weaponize a dangerous pathogen such as smallpox or Ebola.
Poliovirus is perhaps the most studied virus of all time, and at the time of the experiment samples of the virus were stored in labs around the world. The goal of this team’s work wasn’t to reintroduce poliovirus into the wild, but to learn how to synthesize viruses. It was the first time anyone had created this type of virus from scratch, and the Department of Defense hailed the team’s research as a massive technical achievement.
Knowing how to synthesize viral DNA helped the United States gain new insights into how viruses mutate, how they become immune to vaccines, and how they could be developed as weapons. And although creating a virus to study how it might be used as a bioweapon may sound legally questionable, the project didn’t violate any existing dual-use treaties, not even a 1972 treaty explicitly banning germ weapons, which outlaws manufacturing disease-producing agents—such as bacteria, viruses, and biological toxins—that could be used to harm people, animals, or plants.
Nonetheless, the scientific community was incensed. Intentionally making a “synthetic human pathogen” was “irresponsible,” J. Craig Venter, a geneticist and synthetic biology’s progenitor, said at the time. But this was no isolated incident. Consider what happened with smallpox.
The World Health Organization declared smallpox eradicated in 1979. This marked a major human achievement, because smallpox is a truly diabolical disease—extremely contagious, and with no known cure. It causes high fever, vomiting, severe stomachache, a red rash, and painful, yellowish, pus-filled domes all over the body, which start inside the throat, then spread to the mouth, cheeks, eyes, and forehead. As the virus tightens its grip, the rash spreads: to the soles of the feet, the palms of the hands, the crease in the buttocks, and all around the victim’s backside. Any movement pressures those lesions until they burst through nerves and skin, leaving behind a trail of thick fluid made of flaky, dead tissue and virus.
Only two known samples of natural smallpox exist: One is housed at the CDC, the other at the State Research Center of Virology and Biotechnology, in Russia. For years, security experts and scientists have debated whether to destroy those samples, because no one wants another global smallpox pandemic. That debate was made moot in 2018, when a research team at the University of Alberta, in Canada, synthesized horsepox, a previously extinct cousin of smallpox, in just six months, with DNA it had ordered online. The protocol for making horsepox would also work for smallpox.
The team published an in-depth explanation of how it synthesized the virus in PLOS One, a peer-reviewed, open-access scientific journal that anyone can read online.
The paper included the methodology the scientists used to resurrect horsepox along with best practices for those who wanted to repeat the experiment in their own lab. To the team’s credit, before publishing its research, its lead investigator followed scientific protocol and alerted the Canadian government. The team also disclosed its competing interests: One of the investigators was also the CEO and chairman of a company called Tonix Pharmaceuticals, a biotech company investigating novel approaches to neurological disorders; the company and the university had filed a U.S.-patent application for “synthetic chimeric poxviruses” a year earlier. No one—not the Canadian government, nor the journal’s editors—sent back a request for them to rescind the paper.
The poliovirus and horsepox experiments dealt with synthesizing viruses using technology designed for well-intentioned purposes. What scientists and security experts fear is different: terrorists not only synthesizing a deadly pathogen, but intentionally mutating it so that it gains strength, resilience, and speed. Scientists conduct such research in high-security containment labs, attempting to anticipate worst-case-scenario pathogens by creating and studying them. Ron Fouchier, a virologist at the Erasmus Medical Center, in Rotterdam, announced in 2011 that he’d successfully augmented the H5N1 bird-flu virus so that it could be transmitted from birds to humans, and then between people, as a new strain of deadly flu.
Before COVID-19, the H5N1 virus was the worst to hit our planet since the 1918 Spanish flu. At the time that Fouchier conducted his experiment, only 565 people were known to have been infected with H5N1, but it had a high mortality rate: 59 percent of those who’d been infected died. Fouchier had taken one of the most dangerous naturally occurring flu viruses we had ever encountered and made it even more lethal. He told fellow scientists that he’d “mutated the hell” out of H5N1 to make it airborne and therefore significantly more contagious. There was no H5N1 vaccine. The existing virus was already resistant to the antivirals approved for treatment. Fouchier’s discovery, which was funded in part by the U.S. government, scared scientists and security experts so much that, in an unprecedented move, the National Science Advisory Board for Biosecurity, within the National Institutes of Health, asked the journals Science and Nature to redact parts of his paper ahead of publication. They feared that some of the details and mutation data could enable a rogue scientist, hostile government, or group of terrorists to make their own hyper-contagious version of H5N1.
We’ve just lived through a global pandemic that no one wants to see replicated. We may have COVID-19 vaccines, but the path to endemicity is bumpy and will entail incalculable death and morbidity. Before we can even hope to eradicate SARS-CoV-2, as we eventually did with smallpox, there will be more mutations and many new strains. Some could affect the body in ways we’ve not yet seen or even imagined. We will continue to live with tremendous uncertainty over how and when the virus will further mutate.
Obviously, one would hope that virus research would be undertaken in a lab where fanatical adherence to safety and rigorous oversight policies were strictly enforced. Just before the WHO declared smallpox eradicated, a photographer named Janet Parker was working at a medical school in Birmingham, England. She developed a fever and body aches, and, a few days later, a red rash. At the time, she thought it was chicken pox. (That vaccine had not yet been developed.) The tiny, pimple-like dots she’d been expecting, however, developed into much bigger lesions, and they were full of a yellowish, milky fluid. As her condition worsened, doctors determined that she’d contracted smallpox, almost certainly from a sloppily managed high-security research lab inside the same building where she worked.
Parker, sadly, is now remembered as the last person known to have died from smallpox. Does the benefit of being able to accurately predict virus mutations outweigh the public risks of gain-of-function research (that is, research that involves intentionally mutating viruses to make them stronger, more transmissible, and more dangerous)? It depends on whom you ask.
Or, rather, which agency you ask. The NIH issued a series of biosafety guidelines for research on H5N1 and other flu viruses in 2013, but the guidelines were narrow and didn’t cover other kinds of viruses. The White House Office of Science and Technology Policy announced a new process to assess the risks and benefits of gain-of-function experiments in 2014. It included influenza along with the MERS and SARS viruses. But that new policy also halted existing studies intended to develop flu vaccines. So the government reversed course in 2017, when the National Science Advisory Board for Biosecurity determined that such research wouldn’t pose a risk to public safety. In 2019, the U.S. government said that it had resumed funding for—wait for it—a new round of gain-of-function experiments intended to make the H5N1 bird flu more transmissible again.
Meanwhile, this back-and-forth doesn’t stop bad actors from gaining access to open-source research papers and mail-order genetic material. When it comes to synthetic biology, security experts are particularly concerned about future dual-use issues. Traditional force protection—the security strategies to keep populations safe—won’t work against an adversary that has adapted gene products or designer molecules to use as bioweapons.
In an August 2020 paper published in the academic journal CTC Sentinel , which focuses on contemporary terrorism threats, Ken Wickiser, a biochemist and the associate dean of research at West Point, wrote: “As molecular engineering techniques of the synthetic biologists become more robust and widespread, the probability of encountering one or more of these threats is approaching certainty … The change to the threat landscape created by these techniques is rivaled only by the development of the atomic bomb.” In December 2017, the Trump administration released new guidelines clearing the way for government-funded gain-of-function projects intended not just to monitor for new potential pathogens, but to encourage the study of intentional gain-of-function mutations.
To other nations, this broadcasts a clear message: The United States is working on viral bioweapons. The last thing we need right now is a biological arms race. It’s worth noting that the companies that make vaccines haven’t publicly called for gain-of-function research or indicated that the research would assist them in ramping up supply chains for future vaccines.
Banning gain-of-function research isn’t tantamount to stopping work on synthetic viruses, vaccines, antivirals, or virus tests altogether. We are surrounded by viruses. They’re important and integral to our ecosystems. They can be harnessed for beneficial functions, which include precision antibiotics for hard-to-kill microbes, cancer treatments, and delivery vehicles for gene therapies. But we should monitor this type of work as closely as we monitor the development of nuclear technologies.
Countries typically come together during a crisis, not before one. It’s easy to agree on danger. It’s far harder to agree on a shared vision and a grand transformation. But countries could be encouraged to collaborate for public good because they have an overwhelming interest in, say, developing their bioeconomies instead of spending resources to create new tools for biowarfare.
One model is the Bretton Woods Agreement, a 1944 pact between the Allied nations of World War II that laid the foundation for a new global monetary system. Among the agreement’s provisions were plans to create two new organizations tasked with monitoring the new system and promoting economic growth: the World Bank and the International Monetary Fund. The Bretton Woods nations agreed to collaborate. If one country’s currency became too weak, the other countries would step in to help; if it was devalued beyond a certain point, the IMF would bail that country out.
They also agreed to avoid trade wars. But the IMF wouldn’t function like a global central bank. Instead it would operate as a sort of free library, from which its members could borrow when needed, while also being required to contribute to a pool of gold and currency to keep the system running. Eventually, the Bretton Woods system included 44 countries that came to consensus on regulating and promoting international trade.
The collaborative approach worked well because all members stood to gain or lose if they violated the compact. The Bretton Woods system was dissolved in the 1970s, but the IMF and the World Bank still provide a strong foundation for international currency exchange.
Instead of monitoring and regulating a global pool of money, the system I propose would govern the global pool of genetic data. Member nations would agree to use an immutable blockchain-based tracking system to record genetic sequences as well as standardized parts, orders, and products.
This kind of global system would require companies to screen synthetic gene orders against various DNA databases housing sequences of regulated pathogens and known toxins, and then authenticate buyers and record transactions in a public database.
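The proposal does not prescribe a screening algorithm. One common approach in biosecurity screening is to flag any order that shares short subsequences (k-mers) with a regulated sequence; the sketch below assumes that approach, and its database entry, order string, and k-mer length are invented for illustration:

```python
# Minimal sketch of synthesis-order screening: flag an order if it
# shares any length-k subsequence (k-mer) with a regulated sequence.
# The "database" and order below are invented placeholders; real
# screening databases and match criteria are far more elaborate.

def kmers(seq: str, k: int) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order: str, regulated: dict[str, str], k: int = 12) -> list[str]:
    """Return the names of regulated sequences the order overlaps."""
    order_kmers = kmers(order.upper(), k)
    return [name for name, seq in regulated.items()
            if order_kmers & kmers(seq.upper(), k)]

# Hypothetical regulated fragment (not real pathogen DNA).
REGULATED = {"pathogen_fragment_A": "ATGGCGTACCTTGGAACT" * 3}

hits = screen_order("ATGGCGTACCTTGGAACTATGG", REGULATED)
if hits:
    print("Order flagged for review:", hits)  # then authenticate the buyer and log the order
```

A flagged order would then feed the second half of the proposal: authenticate the buyer and record the transaction in the shared database.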
The global pool of genetic data includes DNA, which reveals our most sensitive and personal secrets. Insurance companies, the police, and adversaries would be intensely interested in that information. At least 70 countries now maintain national DNA registries, some of which include data that were collected without gaining informed consent.
The current approach to national registries positions DNA as a policing tool while missing the opportunity to pool genetic data for globally scaled research projects that could benefit us all. A tiny country of just 1.3 million people demonstrates a better way forward.
From a fragile perch in Northern Europe, uncomfortably close to a hostile Russia, Estonia has built what has long been considered one of the world’s most advanced digital ecosystems. Its state-issued digital identity allows residents to safely handle online transactions with government authorities, tax and registration offices, and many other public and private services. Citizens have voted electronically since 2005, using their digital ID for authentication. That same digital ID serves as a backbone for Estonia’s health system, which connects citizens and their centrally stored personal health and medical records to doctors and health-care providers.
Estonia’s digital ecosystem also makes it easier to do data-intensive genetic research. The country’s Biobank includes genetic and health information for 20 percent of its adults, who consented to opt in to genetic-research programs. Estonia’s system offers them free genotyping and related education classes, which—bless the Estonian ethos—people actually attend. That digital-ID system also guarantees participants security and anonymity.
In a biotech Bretton Woods system, member countries could build a similar blockchain-based digital-ID system to create an unchangeable ledger of personal genomic data for research programs. Estonia’s approach to informed consent would be a good model for member nations of this proposed system.
Member nations would then contribute a percentage of their population’s genetic data into a global pool. Such a system would encourage responsible use and development of genetic data and encourage accountability. A standard system for genetic-sequence storage and retrieval would make audits easier and more scalable.
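The ledger mechanics are left open here; the essential property, that past records cannot be quietly altered, can be illustrated with a simple hash chain in which each entry commits to its predecessor. This is a toy stand-in for a production blockchain, with invented record fields:

```python
import hashlib
import json
import time

# Toy append-only ledger: each record stores the hash of its
# predecessor, so altering any past entry breaks every later hash.

def add_record(ledger: list, payload: dict) -> None:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"payload": payload, "prev": prev_hash, "ts": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)

def verify(ledger: list) -> bool:
    for i, rec in enumerate(ledger):
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != digest or (i and rec["prev"] != ledger[i - 1]["hash"]):
            return False
    return True

ledger: list = []
add_record(ledger, {"sequence_id": "seq-00042", "buyer": "registered-lab-17"})
print(verify(ledger))  # True until any past record is tampered with
```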
The stakes are unimaginably high because biology is unpredictable and tends to self-sustain, even when we don’t want it to. Already, new life-forms that never existed before in nature are in development. Some have been booted up from computer code to living cells and tissue. Evolution is evolving, and if we don’t get this next phase right, today’s harmless experimentation could result in tomorrow’s planetary-scale catastrophe.
This post is excerpted from Amy Webb’s book The Genesis Machine: Our Quest to Rewrite Life in the Age of Synthetic Biology.
" |
291 | 2,000 | "The Computers of Tomorrow - The Atlantic" | "https://www.theatlantic.com/magazine/archive/1964/05/the-computers-of-tomorrow/658239" | "
The Computers of Tomorrow
In the past two decades, thousands of computers have been applied successfully in various industries. How much more widespread will their use become? MARTIN GREENBERGER, who is associate professor at the School of Industrial Management of M.I.T., has been working with computers for fourteen years.
NINETEEN years ago, in the July, 1945, issue of the Atlantic, Vannevar Bush predicted that the “advanced arithmetical machines of the future” would be (a) electrical in nature, (b) far more versatile than accounting machines, (c) readily adapted for a wide variety of operations, (d) controlled by instructions, (e) exceedingly fast in complex computation, and (f) capable of recording results in reusable form.
Tens of thousands of computers have been perfected and successfully applied in the past two decades, and each one attests to the remarkable clarity of Dr. Bush’s vision. Few of his readers in 1945 could have imagined the major strides that were about to be made in computer technology. Dr. Bush himself was only extrapolating from the technology of the time in these particular predictions. He did not assume the concept of internally stored programming, described by John von Neumann the following year; nor did he bank on the perfection of electronic logic, magnetic cores, and transistors. Yet, in a functional sense, his predictions scored a virtual bull’s-eye.
Only a decade ago, in 1954, a UNIVAC was delivered to the General Electric Company in Louisville for business use. Up to that point, computers had been applied almost exclusively to scientific calculation. Quickly, payroll, inventory, and customer accounting became fair game. Today there are probably more than twenty thousand computers in use within the United States, and correspondingly large numbers are installed in many other countries around the world. Computers run at speeds of up to millions of operations per second, and do so with negligible rates of error. Their linguistic abilities have been broadened impressively through development of elaborate programming systems, and their memories can be virtually unlimited in size over a range of times of recall.
By achieving reliability along with capability, computers have won broad commercial acceptance. But what of the future? What can we expect as computers enter their third decade? Some conservatives have been predicting a deceleration of computer growth for at least five years now. Is there a plateau just over the horizon? Not if a recent turn in computer research is as significant as many of us believe it to be. General economic and political conditions permitting, this work will nourish a new wave of computer expansion. Computing services and establishments will begin to spread throughout every sector of American life, reaching into homes, offices, classrooms, laboratories, factories, and businesses of all kinds.
ANALOGY WITH ELECTRICITY
The computing machine is fundamentally an extremely useful device. The service it provides has a kind of universality and generality not unlike that afforded by electric power. Electricity can be harnessed for any of a wide variety of jobs: running machinery, exercising control, transmitting information, producing sound, heat, and light. Symbolic computation can be applied to an equally broad range of tasks: routine numerical calculations, manipulation of textual data, automatic control of instrumentation, simulation of dynamic processes, statistical analyses, problem solving, game playing, information storage, retrieval, and display.
Within reasonable limits the user is assured that electrical energy will always be available to the extent required. Power failures and overloading are relatively infrequent. Ten years ago an analogous statement for computation would have been a misrepresentation. Error rates in the computer were precariously high, and service was uncertain by any standards. Today, however, improved components have all but eliminated reliability as a consideration in the use of computers. Overloading is still a problem, but this is mostly a consequence of burgeoning demand.
Where, then, does the analogy with electrical energy break down? Why has automatic computation not pervaded industry as electricity has done? Is it simply a matter of time, or do the differences between the two, by their nature, enforce a permanent disparity? The first difference that comes to mind is cost. Three pennies keep a large electric light bulb burning all night, and they buy about thirty thousand additions or subtractions or other elementary computations at current large-computer rates (omitting overhead, communication, and programming expense). This is enough computation to balance a large number of monthly bank statements, and at face value seems to compare very favorably with the equivalent amount of electricity. Furthermore, the cost of computation has been decreasing steadily, whereas electric rates have been stable for over twenty years now.
But a complication arises when we try to distribute small chunks of computation widely on a regular basis. The electric utility finds it easy to accommodate numerous customers consuming as little as 1 kilowatt-hour or 1 watt-hour at a time. It does not even have to charge a premium for the privilege of using small chunks if the total monthly consumption of a customer is large enough.
Not so for computation, as indicated by present experiments with computer systems that share their time among a number of concurrent demands. These experiments, while demonstrating the feasibility of making a conventional computer accessible to many small remote users simultaneously, also demonstrate the sizable hidden cost of such service. Overhead in supervising user programs, as well as in shuffling them around memory, can increase actual costs to several times the figure implied by a naive analysis based on more conventional computer techniques. But today’s computers were not built to be time-shared. With a new generation of computers, overhead of the kind mentioned may shrink to relative insignificance.
Electrical power is immediately available as soon as it is requested, no matter how much power (up to predefined limits) is being drawn. In the timesharing experiments, on the other hand, some of the longer requests for computation are delayed excessively during periods of heavy demand. Certain classes of use can tolerate delay more than others, so it is not mandatory to eliminate it completely. Since the delay is caused largely by the heavy (free) loading on present time-shared systems, it is reasonable to expect alleviation of the problem, at least in the business world, not only from better computer systems but also from the institution of price schedules based on amount and type of use.
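Greenberger's overhead problem is easy to make concrete. In a round-robin time-sharing scheme, the supervisor pays a fixed cost every time it swaps one user's program for another, and the shorter the time slice, the larger that cost looms relative to useful work. A toy simulation, with arbitrary job sizes and switch costs rather than figures from the 1964 experiments:

```python
# Toy round-robin time-sharing: each active job receives a fixed
# quantum of computation per turn, and every swap between jobs costs
# the supervisor a fixed overhead. All numbers are arbitrary.

def run_round_robin(jobs, quantum, switch_cost=3):
    remaining = list(jobs)
    useful = overhead = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            work = min(quantum, r)
            remaining[i] -= work
            useful += work
            overhead += switch_cost  # shuffling programs in and out of memory
    return useful, overhead

for quantum in (20, 5):
    useful, overhead = run_round_robin([100] * 8, quantum)
    print(f"quantum {quantum}: overhead is {overhead / useful:.0%} of useful work")
```

Shrink the quantum (or raise the swap cost) and the overhead swells toward the "several times" multiple cited above.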
The analogy of automatic computation with electrical power is subject to three major qualifications. First, to get electricity, we simply reach over and flip on a switch or insert a plug into an outlet; computers, by contrast, seem complex, forbidding, and at a distance from most potential users, both in space and time. This condition has been improving, but much work remains to be done.
Second, a wide variety of appliances, bulbs, machinery, and miscellaneous electrical equipment has been invented and perfected to harness electrical power for its various uses; each piece of equipment has its function built right into it, and each couples to its power supply in more or less the same way. But the general-purpose computer performs almost its entire repertoire all by itself, once it has been programmed appropriately, and employs its terminal equipment primarily for the entrance, exit, or temporary storage of information, and for little else. The difference will diminish as more special-purpose terminals are designed for use in conjunction with large memories and fast processors. Whether it will ever disappear entirely is doubtful, but it is worth noting that the development of most electrical appliances came well after the realization of electrical distribution equipment.
Third, electricity is a relatively homogeneous product, produced centrally and transmitted without interruption and without intelligent guidance by the consumer. Computation, on the other hand, is dynamic in form, and its course is typically guided by action of the user. The two-way dialogue and information feedback characteristic of on-line computation is totally absent from the electrical side of the analogy.
These three qualifications by no means kill the dream of large utilities built around the service of computing systems, but they do raise interesting uncertainty about how this dream will materialize.
THE INFORMATION UTILITY
The concept of an information-processing utility poses many questions. Will the role of information utilities be sufficiently extensive and cohesive to create a whole new industry? If so, will this industry consist of a single integrated utility, like American Telephone and Telegraph, or will there be numerous individual utilities, like Consolidated Edison and the Boston Gas Company? Will the design and manufacture of computing components, terminal equipment, and programming systems be accomplished by subsidiaries of the information utility, as in the telephone industry, or will there be a separate industry of independent private manufacturers, like General Electric and Westinghouse in today’s electrical equipment industry? Perhaps the most important question of all concerns the legal matter of government regulation. Will the information utility be a public utility, or will it be privately owned and operated? Will some large companies have their own information utilities, just as some companies today have their own generating plants? Central to all these questions is the matter of cost. Computation, like electricity and unlike oil, is not stored. Since its production is concurrent with its consumption, production capacity must provide for peak loads, and the cost of equipment per dollar of revenue can soar.
The high cost of capital equipment is a major reason why producers of electricity are public utilities instead of unregulated companies. A second reason is the extensive distribution network they require to make their product generally available. This network, once established, is geographically fixed and immovable. Wasteful duplication and proliferation of lines could easily result if there were no public regulation.
Given the advanced state of development of present communications lines, it is unlikely that information utilities will wish to invest in their own communication networks. This may be taken as an argument against the necessity for stifling free competition and placing information utilities under public regulation; yet, there is another massive investment that the information utilities will not be able to sidestep as easily, if at all — namely, investment in the large programming systems required to supervise the operation of the information utility and provide its services. The information utility should be able to shift part of this burden to the shoulders of its customers, but it will have to bear responsibility itself for the design, maintenance, and modification of the core of the programming system. The vast potential magnitude of this system, plus the fact that its usefulness may not extend beyond the physical machinery for which it was constructed, plus the possibility of programming waste from having too many entries in the field, may tip the balance in favor of a regulated monopoly.
In summary, a very substantial amount of capital is needed in the development of information utilities, capital to furnish both equipment and programming. Thus, even if no new communication lines of a proprietary nature are required, the public-utility format may still prove to be the best answer. On the other hand, one very persuasive reason for the private-company format is the stimulating effect of free enterprise and competition on imagination and hard work — vital prerequisites for realization of the information utility.
Whichever way the balance tips, it is clear that information utilities will be enterprises of considerable size. If they form an industry of private companies, then the industry probably will be dominated by one or two firms of giant proportions. Logical candidates among existing companies include not only the large communication and computer enterprises, but also the big computer users.
BETTER THAN MONEY
The organizational impact of the information utility will extend well beyond the one or two industries directly concerned. User industries, such as banking and retailing, may also be greatly affected. Suppose, for example, that businesses of all sizes have simple terminals linking them electronically to a central information exchange. Then each business can make instantaneous credit checks and offer its customers the convenience of universal credit cards. These cards, referred to by some as “money keys,” together with the simple terminals and information exchange, can all but eliminate the need for currency, checks, cash registers, sales slips, and making change. When the card is inserted in the terminal and the amount of the purchase keyed in, a record of the transaction is produced centrally and the customer’s balance is updated. A signal is transmitted to the terminal from the central exchange if the customer’s balance is not adequate for the sale. Positive credits to the customer’s account, such as payroll payments, benefits, dividends, and gifts are entered in a similar way. Periodic account statements are figured automatically and delivered to customers, perhaps directly to a private terminal for some, or by postal service for others.
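The transaction logic of this imagined credit exchange is simple enough to sketch: check the balance at the central exchange, then either record the sale or signal the terminal. The account balance and card identifier below are invented placeholders:

```python
# Toy central credit exchange: a terminal keys in a purchase, the
# exchange checks the customer's balance, records the transaction,
# and signals the terminal if the balance is not adequate.

accounts = {"customer-1138": 50.00}   # invented balance, in dollars
transactions = []                     # central record for periodic statements

def purchase(card_id: str, amount: float) -> str:
    balance = accounts.get(card_id)
    if balance is None:
        return "SIGNAL: unknown card"
    if amount > balance:
        return "SIGNAL: balance not adequate for the sale"
    accounts[card_id] = balance - amount
    transactions.append((card_id, -amount))
    return "OK"

print(purchase("customer-1138", 19.95))  # OK
print(purchase("customer-1138", 99.00))  # SIGNAL: balance not adequate for the sale
```

Credits such as payroll payments would simply append positive amounts the same way.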
Any number of variations on this theme are conceivable, up to and including the virtual disappearance of our traditional media for commerce. The savings resulting from eliminating the physical handling and flow of money, as well as the clearing and transfer of checks, would justify a considerable expenditure for electronic equipment.
Secondary benefits might include the semiautomatic preparation of income tax returns and the automation of most bill collection. Incidentally, we can look forward in the process to displacing another class of manual labor: miscellaneous thieves who prey on money. The increased possibilities for embezzlement through fraudulent accounting may attract some of the resulting unemployed, but there are ways that the computer can be deputized to police its own operation, quietly and without danger of corruption.
PERSONALIZED INSURANCE
Insurance is another staid industry whose way of doing business could change more than some may realize. Insurance policies are sold by agents at present from a relatively fixed, relatively small number of plans formulated by the actuarial department of the insurance company. Suppose all the actuarial figures on which these plans are based, together with other relevant statistics, are brought together in the store of a central computing system, and on-line terminals are placed at the company’s field offices. Then there is no reason why policies cannot be custom-tailored to each prospect’s needs and characteristics as a regular service. Personalized insurance would have considerable marketing appeal, and offers several subtle advantages. At least one of the very large insurance companies is already taking steps in this direction. Equitable Life is reputed to be planning a telephone link of 114 typewriter terminals, located at field offices and operating departments, with a central computing system at the home office. The magnitude of the project is estimated at $12 million and 5 years’ duration.
With personalized insurance, the rates of premiums can be made to vary with the company’s changing inventory of policies and insureds. Thus, a continual control over aggregate risk can be maintained. Since premiums are based on a much more complete description of a prospect than at present, there is less need for grouping of essentially different risk categories into the same premium class. Approximately 50 percent of the insureds (the less risky half) would receive better rates from personalized insurance than from insurance offered by competing companies that operate with fixed plans. As a result, there would be a gradual drift of more profitable (less risky) customers over to personalized insurance. Thus, the rates could be made still more favorable, and the competitive margin would grow.
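The selection spiral described above can be made numeric: if the personalized insurer charges each prospect his true expected loss while a fixed-plan rival charges the pool average, the cheaper half defects, the rival's pool worsens, and its average premium climbs. A toy iteration, with invented loss figures:

```python
# Toy adverse-selection loop: the fixed-plan insurer charges the pool
# average; anyone whose personalized (true-risk) premium would be
# lower defects, worsening the remaining pool. Figures are invented.

pool = [20, 40, 60, 80, 100, 120]   # expected annual loss per insured, in dollars

for year in range(1, 4):
    flat_premium = sum(pool) / len(pool)
    defectors = [risk for risk in pool if risk < flat_premium]
    pool = [risk for risk in pool if risk >= flat_premium]
    print(f"year {year}: flat premium ${flat_premium:.0f}, {len(defectors)} insureds defect")
    if not pool:
        break
```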
A final advantage of personalized insurance is the ease with which a customer can trade up or down. As the customer’s family expands, as his children approach college age, as they become self-supporting, as he approaches retirement, and so on, his insurance requirements change. At any time he can go to the nearest personalized terminal and key in information on his current insurance portfolio and on the adjustments he wishes to make. Within minutes he receives an indication of the differential premium due or saved, and this permits him to decide whether to trade. An agent can act as intermediary if self-service turns out to be unprofitable; or the computer may be able to sell its own insurance policies via persuasive discourse with the customer.
COMPUTER-MANAGED MARKETS
Certain people who are intimately familiar with the workings of the New York Stock Exchange see no reason why its entire operation cannot be automated. Their thoughts go well beyond the mechanization of quotations and reporting procedures that is currently in progress. These persons find no real need for the floor specialists, for example. They believe that the computer could be programmed to maintain at least as stable and fluid a market as the specialists maintain, and serve at least as well in the public interest. Readers of the recent SEC staff study on the security markets will appreciate immediately some of the potential benefits of eliminating specialists, over and above the tangible savings in commissions and paper flow.
Every investor has a “seat” on the computerized exchange, and even brokers become dispensable (although they, like insurance agents, may remain as the most deep-rooted of present institutions). Transactions are handled by an information utility which feeds customer orders directly to the computer system, keeps book, makes a market, and collects commissions on each transaction. Similar arrangements are possible for the other security and commodity markets, regardless of size, as well as for bond trading, mutual-fund sales, and so on.
A St. Louis broker has suggested the formation of a National Trading Corporation to automate the quoting and trading of securities in the over-the-counter market. His proposal could provide a first step. Operation of the computerized security exchange ties in naturally with operation of the central credit exchange. Transactions on the security exchange can be preceded by checks on the appropriate accounts of the credit exchange and result in adjustments to these accounts. Margin allowances made as part of the normal operation of the credit exchange permit a tighter watch over excessive borrowing and other violations than is now possible.
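"Keeping book" is the part of the specialist's job a computer does most naturally: hold resting orders and cross them when prices meet. A bare-bones sketch of one side of such a book, with invented prices and quantities and every real-world complication stripped away:

```python
# Bare-bones order book: resting sell offers are kept sorted by price,
# and an incoming buy order fills against the cheapest offers whose
# price it meets or beats. The exchange could assess a commission on
# each printed trade.

offers = []   # [price, quantity] pairs, kept sorted by price

def sell(price: float, qty: int) -> None:
    offers.append([price, qty])
    offers.sort()

def buy(limit: float, qty: int) -> None:
    while qty and offers and offers[0][0] <= limit:
        price, avail = offers[0]
        fill = min(qty, avail)
        print(f"trade: {fill} shares @ {price:.2f}")
        qty -= fill
        offers[0][1] -= fill
        if offers[0][1] == 0:
            offers.pop(0)
    if qty:
        print(f"{qty} shares rest unfilled at limit {limit:.2f}")

sell(10.50, 100)
sell(10.75, 100)
buy(10.60, 150)   # fills 100 @ 10.50; 50 shares rest unfilled
```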
Computer-managed markets working together with computer-regulated credit may sound more than a bit Orwellian, but the potential for good from this merger is enormous. Unregulated credit in the purchase of securities was one of the chief factors that contributed to the severe decline in stock prices of May, 1962, just as heavy margin positions in the twenties sealed the lid on the 1929 debacle. With the information utility keeping a vastly expanded and mechanized Federal Reserve type of scrutiny and control over the flow of credit and the operation of markets, the United States could be within an arm’s length of stabilizing the behavior of its economy, an elusive goal that is almost as old as the economy itself.
INFORMATION, PLEASE
The range of application of the information utility extends well beyond the few possibilities that have been sketched. It includes medical-information systems for hospitals and clinics, centralized traffic control for cities and highways, catalogue shopping from a convenience terminal at home, automatic libraries linked to home and office, integrated management-control systems for companies and factories, teaching consoles in the classroom, research consoles in the laboratory, design consoles in the engineering firm, editing consoles in the publishing office, computerized communities. Different subscribers to the same information utility will be able to use one another’s programs and facilities through intersubscriber arrangements worked out with the utility on a fee basis.
As more and more of these services are perfected, an increasing percentage of the day-to-day functioning of man, the economy, and society will become documented and mechanically recorded in easily accessible form. It will no longer be necessary to conduct costly surveys and door-to-door interviews to acquire data on consumer tastes or investment behavior, at times only to find that the data are inappropriate or anachronistic for the needs of research. Research investigators will specify their precise data requirements and will requisition custom studies from the files of the information utility. The studies will be timely and current, and a great boon to analysts and simulators. As their use develops, these data studies will be invaluable for corporate decision-making and government planning, to the point where they may be woven into the very fabric of these processes. It is not a mere flight of fancy to anticipate the day when information automatically acquired during the operation of the information utility feeds directly into decision mechanisms that regulate the economy and the activity of companies.
The information service may be conducted by the information utility itself, by a subsidiary, or by one or more of the subscribers. The information service represents a profitable and natural fulfillment of the utility’s role and function. Revenue is created by the utility on both ends of the data line — for example, in the production of sales data, when the utility can charge for making a money transaction unnecessary; and again in the marketing of this same data, when the utility can charge for providing detailed information that would be costly and difficult to obtain any other way.
SIMULATION, PLEASE
Among the chief potential users of custom information are persons engaged in simulation studies and dynamic modeling. Simulation is about the most promising approach known for the general analysis of complex systems and stochastic processes. On the operating level, it affords the user a way of asking the question, what if. The use of simulation by staff specialists, systems analysts, decision makers, social scientists, and others will markedly expand as the information utility makes powerful computers and programming systems easily accessible.
Most users of simulation will not have the knowledge or desire to build their own models, especially as simulation starts being applied by line managers and operating personnel. Assistance in the formulation, adjustment, and validation of models will be provided by an on-line simulation center, joined by the information utility to both the users and the relevant information sources. Simulation service, like information, will be obtained by a procedure as simple as dialing a telephone number.
A simulation service could be of great value as a proving ground for development of an early form of information utility, and could provide a bootstrap for further refinement of the utility. Each contemplated service could be designed by successive approximations, simulated, and revised before it is instituted. This is especially important for a service such as the automated stock exchange, where design errors can cost millions of dollars and experiments on the real system are impractical. In addition, a working prototype of the exchange, displayed by the simulation service, could persuade the doubtful and the wary.
Barring unforeseen obstacles, an on-line interactive computer service, provided commercially by an information utility, may be as commonplace by 2000 A.D. as telephone service is today. By 2000 A.D. man should have a much better comprehension of himself and his system, not because he will be innately any smarter than he is today, but because he will have learned to use imaginatively the most powerful amplifier of intelligence yet devised.
" |
292 | 2,023 | "Inside the Biden White House as Kabul Fell - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2023/10/afghanistan-withdrawal-biden-decision/675116" | "
The Final Days Joe Biden was determined to get out of Afghanistan—no matter the cost.
August 1
August is the month when oppressive humidity causes the mass evacuation of official Washington. In 2021, White House Press Secretary Jen Psaki piled her family into the car for a week at the beach. Secretary of State Antony Blinken headed to the Hamptons to visit his elderly father. Their boss left for the leafy sanctuary of Camp David.
They knew that when they returned, their attention would shift to a date circled at the end of the month. On August 31, the United States would officially complete its withdrawal from Afghanistan, concluding the longest war in American history.
The State Department didn’t expect to solve Afghanistan’s problems by that date. But if everything went well, there was a chance to wheedle the two warring sides into some sort of agreement that would culminate in the nation’s president, Ashraf Ghani, resigning from office, beginning an orderly transfer of power to a governing coalition that included the Taliban. There was even discussion of Blinken flying out, most likely to Doha, Qatar, to preside over the signing of an accord.
It would be an ending, but not the end. Within the State Department there was a strongly held belief: Even after August 31, the embassy in Kabul would remain open. It wouldn’t be as robustly staffed, but some aid programs would continue; visas would still be issued. The United States—at least the State Department—wasn’t going to abandon the country.
There were plans for catastrophic scenarios, which had been practiced in tabletop simulations, but no one anticipated that they would be needed. Intelligence assessments asserted that the Afghan military would be able to hold off the Taliban for months, though the number of months kept dwindling as the Taliban conquered terrain more quickly than the analysts had predicted. But as August began, the grim future of Afghanistan seemed to exist in the distance, beyond the end of the month, not on America’s watch.
That grim future arrived disastrously ahead of schedule. What follows is an intimate history of that excruciating month of withdrawal, as narrated by its participants, based on dozens of interviews conducted shortly after the fact, when memories were fresh and emotions raw. At times, as I spoke with these participants, I felt as if I was their confessor. Their failings were so apparent that they had a desperate need to explain themselves, but also an impulse to relive moments of drama and pain more intense than any they had experienced in their career.
During those fraught days, foreign policy, so often debated in the abstract, or conducted from the sanitized remove of the Situation Room, became horrifyingly vivid. President Joe Biden and his aides found themselves staring hard at the consequences of their decisions.
Even in the thick of the crisis, as the details of a mass evacuation swallowed them, the members of Biden’s inner circle could see that the legacy of the month would stalk them into the next election—and perhaps into their obituaries. Though it was a moment when their shortcomings were on obvious display, they also believed it evinced resilience and improvisational skill.
And amid the crisis, a crisis that taxed his character and managerial acumen, the president revealed himself. For a man long caricatured as a political weather vane, Biden exhibited determination, even stubbornness, despite furious criticism from the establishment figures whose approval he usually craved. For a man vaunted for his empathy, he could be detached, even icy, when confronted with the prospect of human suffering.
When it came to foreign policy, Joe Biden possessed a swaggering faith in himself. He liked to knock the diplomats and pundits who would pontificate at the Council on Foreign Relations and the Munich Security Conference. He called them risk-averse, beholden to institutions, lazy in their thinking. Listening to these complaints, a friend once posed the obvious question: If you have such negative things to say about these confabs, then why attend so many of them? Biden replied, “If I don’t go, they’re going to get stale as hell.” From 12 years as the top Democrat on the Senate Foreign Relations Committee—and then eight years as the vice president—Biden had acquired a sense that he could scythe through conventional wisdom. He distrusted mandarins, even those he had hired for his staff. They were always muddying things with theories. One aide recalled that he would say, “You foreign-policy guys, you think this is all pretty complicated. But it’s just like family dynamics.” Foreign affairs was sometimes painful, often futile, but really it was emotional intelligence applied to people with names that were difficult to pronounce. Diplomacy, in Biden’s view, was akin to persuading a pain-in-the-ass uncle to stop drinking so much.
One subject seemed to provoke his contrarian side above all others: the war in Afghanistan. His strong opinions were grounded in experience. Soon after the United States invaded, in late 2001, Biden began visiting the country. He traveled with a sleeping bag; he stood in line alongside Marines, wrapped in a towel, waiting for his turn to shower.
On his first trip, in 2002, Biden met with Interior Minister Yunus Qanuni in his Kabul office, a shell of a building. Qanuni, an old mujahideen fighter, told him: We really appreciate that you have come here. But Americans have a long history of making promises and then breaking them. And if that happens again, the Afghan people are going to be disappointed.
Biden was jet-lagged and irritable. Qanuni’s comments set him off: Let me tell you, if you even think of threatening us … Biden’s aides struggled to calm him down.
In Biden’s moral code, ingratitude is a grievous sin. The United States had evicted the Taliban from power; it had sent young men to die in the nation’s mountains; it would give the new government billions in aid. But throughout the long conflict, Afghan officials kept telling him that the U.S. hadn’t done enough.
The frustration stuck with him, and it clarified his thinking. He began to draw unsentimental conclusions about the war. He could see that the Afghan government was a failed enterprise. He could see that a nation-building campaign of this scale was beyond American capacity.
As vice president, Biden also watched as the military pressured Barack Obama into sending thousands of additional troops to salvage a doomed cause. In his 2020 memoir, A Promised Land, Obama recalled that as he agonized over his Afghan policy, Biden pulled him aside and told him, “Listen to me, boss. Maybe I’ve been around this town for too long, but one thing I know is when these generals are trying to box in a new president.” He drew close and whispered, “Don’t let them jam you.” Biden developed a theory of how he would succeed where Obama had failed. He wasn’t going to let anyone jam him.
In early February 2021, now-President Biden invited his secretary of defense, Lloyd Austin, and the chairman of the Joint Chiefs of Staff, Mark Milley, into the Oval Office. He wanted to acknowledge an emotional truth: “I know you have friends you have lost in this war. I know you feel strongly. I know what you’ve put into this.” Over the years, Biden had traveled to military bases, frequently accompanied by his fellow senator Chuck Hagel. On those trips, Hagel and Biden dipped in and out of a long-running conversation about war. They traded theories on why the United States would remain mired in unwinnable conflicts. One problem was the psychology of defeat. Generals were terrified of being blamed for a loss, living in history as the one who waved the white flag.
It was this dynamic, in part, that kept the United States entangled in Afghanistan. Politicians who hadn’t served in the military could never summon the will to overrule the generals, and the generals could never admit that they were losing. So the war continued indefinitely, a zombie campaign. Biden believed that he could break this cycle, that he could master the psychology of defeat.
Biden wanted to avoid having his generals feel cornered—even as he guided them to his desired outcome. He wanted them to feel heard, to appreciate his good faith. He told Austin and Milley, “Before I make a decision, you’ll have a chance to look me in the eyes.” The date set out by the Doha Agreement, which the Trump administration had negotiated with the Taliban, was May 1, 2021. If the Taliban adhered to a set of conditions—engaging in political negotiations with the Afghan government, refraining from attacking U.S. troops, and cutting ties with terrorist groups—then the United States would remove its soldiers from the country by that date. Because of the May deadline, Biden’s first major foreign-policy decision—whether or not to honor the Doha Agreement—would also be the one he seemed to care most about. And it would need to be made in a sprint.
In the spring, after weeks of meetings with generals and foreign-policy advisers, National Security Adviser Jake Sullivan had the National Security Council generate two documents for the president to read. One outlined the best case for staying in Afghanistan; the other made the best case for leaving.
This reflected Biden’s belief that he faced a binary choice. If he abandoned the Doha Agreement, attacks on U.S. troops would resume. Since the accord had been signed, in February 2020, the Taliban had grown stronger, forging new alliances and sharpening plans. And thanks to the drawdown of troops that had begun under Donald Trump, the United States no longer had a robust-enough force to fight a surging foe.
Biden gathered his aides for one last meeting before he formally made his decision. Toward the end of the session, he asked Sullivan, Blinken, and Director of National Intelligence Avril Haines to leave the room. He wanted to talk with Austin and Milley alone.
Instead of revealing his final decision, Biden told them, “This is hard. I want to go to Camp David this weekend and think about it.” It was always clear where the president would land. Milley knew that his own preferred path for Afghanistan—leaving a small but meaningful contingent of troops in the country—wasn’t shared by the nation he served, or the new commander in chief. Having just survived Trump and a wave of speculation about how the U.S. military might figure in a coup, Milley was eager to demonstrate his fidelity to civilian rule. If Biden wanted to shape the process to get his preferred result, well, that’s how a democracy should work.
On April 14, Biden announced that he would withdraw American forces from Afghanistan. He delivered remarks explaining his decision in the Treaty Room of the White House, the very spot where, in the fall of 2001, George W. Bush had informed the public of the first American strikes against the Taliban.
Biden’s speech contained a hole that few noted at the time. It scarcely mentioned the Afghan people, with not even an expression of best wishes for the nation that the United States would be leaving behind. The Afghans were apparently only incidental to his thinking. (Biden hadn’t spoken with President Ghani until right before the announcement.) Scranton Joe’s deep reserves of compassion were directed at people with whom he felt a connection; his visceral ties were with American soldiers. When he thought about the military’s rank and file, he couldn’t help but project an image of his own late son, Beau. “I’m the first president in 40 years who knows what it means to have a child serving in a war zone,” he said.
Biden also announced a new deadline for the U.S. withdrawal, which would move from May 1 to September 11, the 20th anniversary of the attack that drew the United States into war. The choice of date was polemical. Although he never officially complained about it, Milley didn’t understand the decision. How did it honor the dead to admit defeat in a conflict that had been waged on their behalf? Eventually, the Biden administration pushed the withdrawal deadline forward to August 31, an implicit concession that it had erred.
But the choice of September 11 was telling. Biden took pride in ending an unhappy chapter in American history. Democrats might have once referred to Afghanistan as the “good war,” but it had become a fruitless fight. It had distracted the United States from policies that might preserve the nation’s geostrategic dominance. By leaving Afghanistan, Biden believed he was redirecting the nation’s gaze to the future: “We’ll be much more formidable to our adversaries and competitors over the long term if we fight the battles for the next 20 years, not the last 20.” August 6–9 In late June, Jake Sullivan began to worry that the Pentagon had pulled American personnel and materiel out of Afghanistan too precipitously. The rapid drawdown had allowed the Taliban to advance and to win a string of victories against the Afghan army that had caught the administration by surprise. Even if Taliban fighters weren’t firing at American troops, they were continuing to battle the Afghan army and take control of the countryside. Now they’d captured a provincial capital in the remote southwest—a victory that was disturbingly effortless.
Sullivan asked one of his top aides, Homeland Security Adviser Elizabeth Sherwood-Randall, to convene a meeting for Sunday, August 8, with officials overseeing the withdrawal. Contingency plans contained a switch that could be flipped in an emergency. To avoid a reprise of the fall of Saigon, with desperate hands clinging to the last choppers out of Vietnam, the government made plans for a noncombatant-evacuation operation, or NEO. The U.S. embassy would shut down and relocate to Hamid Karzai International Airport (or HKIA, as everyone called it). Troops, pre-positioned near the Persian Gulf and waiting at Fort Bragg, in North Carolina, would descend on Kabul to protect the airport. Military transport planes would haul American citizens and visa holders out of the country.
By the time Sherwood-Randall had a chance to assemble the meeting, the most pessimistic expectations had been exceeded. The Taliban had captured four more provincial capitals. General Frank McKenzie, the head of U.S. Central Command, filed a commander’s estimate warning that Kabul could be surrounded within about 30 days—a far faster collapse than previously predicted.
McKenzie’s dire warning did strangely little to alter plans. Sherwood-Randall’s group unanimously agreed that it was too soon to declare a NEO. The embassy in Kabul was particularly forceful on this point. The acting ambassador, Ross Wilson, wanted to avoid cultivating a sense of panic in Kabul, which would further collapse the army and the state. Even the CIA seconded this line of thinking.
August 12 At 2 a.m., Sullivan’s phone rang. It was Mark Milley. The military had received reports that the Taliban had entered the city of Ghazni, less than 100 miles from Kabul.
The intelligence community assumed that the Taliban wouldn’t storm Kabul until after the United States left, because the Taliban wanted to avoid a block‑by‑block battle for the city. But the proximity of the Taliban to the embassy and HKIA was terrifying. It necessitated the decisive action that the administration had thus far resisted. Milley wanted Sullivan to initiate a NEO. If the State Department wasn’t going to move quickly, the president needed to order it to. Sullivan assured him that he would push harder, but it would be two more days before the president officially declared a NEO.
With the passage of each hour, Sullivan’s anxieties grew. He called Lloyd Austin and told him, “I think you need to send someone with bars on his arm to Doha to talk to the Taliban so that they understand not to mess with an evacuation.” Austin agreed to dispatch General McKenzie to renew negotiations.
August 13 Austin convened a videoconference with the top civilian and military officials in Kabul. He wanted updates from them before he headed to the White House to brief the president.
Ross Wilson, the acting ambassador, told him, “I need 72 hours before I can begin destroying sensitive documents.” “You have to be done in 72 hours,” Austin replied.
The Taliban were now perched outside Kabul. Delaying the evacuation of the embassy posed a danger that Austin couldn’t abide. Thousands of troops were about to arrive to protect the new makeshift facility that would be set up at the airport. The moment had come to move there.
Abandoning an embassy has its own protocols; they are rituals of panic. The diplomats had a weekend, more or less, to purge the place: to fill its shredders, burn bins, and disintegrator with documents and hard drives. Anything with an American flag on it needed destroying so it couldn’t be used by the enemy for propaganda purposes.
Wisps of smoke would soon begin to blow from the compound—a plume of what had been classified cables and personnel files. Even for those Afghans who didn’t have access to the internet, the narrative would be legible in the sky.
August 14 On Saturday night, Antony Blinken placed a call to Ashraf Ghani. He wanted to make sure the Afghan president remained committed to the negotiations in Doha. The Taliban delegation there was still prepared to agree to a unity government, which it might eventually run, allocating cabinet slots to ministers from Ghani’s government. That notion had broad support from the Afghan political elite. Everyone, even Ghani, agreed that he would need to resign as part of a deal. Blinken wanted to ensure that he wouldn’t waver from his commitments and try to hold on to power.
Although Ghani said that he would comply, he began musing aloud about what might happen if the Taliban invaded Kabul prior to August 31. He told Blinken, “I’d rather die than surrender.” August 15 The next day, the presidential palace released a video of Ghani talking with security officials on the phone. As Ghani sat at his imposing wooden desk, which once belonged to King Amanullah, who had bolted from the palace to avoid an Islamist uprising in 1929, his aides hoped to project a sense of calm.
During the early hours, a small number of Taliban fighters eased their way to the gates of the city, and then into the capital itself. The Taliban leadership didn’t want to invade Kabul until after the American departure. But their soldiers had conquered territory without even firing a shot. In their path, Afghan soldiers simply walked away from checkpoints. Taliban units kept drifting in the direction of the presidential palace.
Rumors traveled more quickly than the invaders. A crowd formed outside a bank in central Kabul.
Nervous customers jostled in a chaotic rush to empty their accounts. Guards fired into the air to disperse the melee. The sound of gunfire reverberated through the nearby palace, which had largely emptied for lunch. Ghani’s closest advisers pressed him to flee. “If you stay,” one told him, according to The Washington Post, “you’ll be killed.” This was a fear rooted in history. In 1996, when the Taliban first invaded Kabul, they hanged the tortured body of the former president from a traffic light. Ghani hustled onto one of three Mi‑17 helicopters waiting inside his compound, bound for Uzbekistan.
The New York Times Magazine later reported that the helicopters were instructed to fly low to the terrain, to evade detection by the U.S. military. From Uzbekistan, he would fly to the United Arab Emirates and an ignominious exile. Without time to pack, he left in plastic sandals, accompanied by his wife. On the tarmac, aides and guards grappled over the choppers’ last remaining seats.
When the rest of Ghani’s staff returned from lunch, they moved through the palace searching for the president, unaware that he had abandoned them, and their country.
At approximately 1:45 p.m., Ambassador Wilson went to the embassy lobby for the ceremonial lowering of the flag. Emotionally drained and worried about his own safety, he prepared to leave the embassy behind, a monument to his nation’s defeat.
Wilson made his way to the helicopter pad so that he could be taken to his new outpost at the airport, where he was told that a trio of choppers had just left the presidential palace. Wilson knew what that likely meant. By the time he relayed his suspicions to Washington, officials already possessed intelligence that confirmed Wilson’s hunch: Ghani had fled.
Jake Sullivan relayed the news to Biden, who exploded in frustration: Give me a break.
Later that afternoon, General McKenzie arrived at the Ritz-Carlton in Doha. Well before Ghani’s departure from power, the wizened Marine had scheduled a meeting with an old adversary of the United States, Mullah Abdul Ghani Baradar.
Baradar wasn’t just any Taliban leader. He was a co-founder of the group, with Mullah Mohammed Omar. McKenzie had arrived with the intention of delivering a stern warning. He barely had time to tweak his agenda after learning of Ghani’s exit.
McKenzie unfolded a map of Afghanistan translated into Pashto. A circle had been drawn around the center of Kabul—a radius of about 25 kilometers—and he pointed to it. He referred to this area as the “ring of death.” If the Taliban operated within those 25 kilometers, McKenzie said, “we’re going to assume hostile intent, and we’ll strike hard.” McKenzie tried to bolster his threat with logic. He said he didn’t want to end up in a firefight with the Taliban, and that would be a lot less likely to happen if they weren’t in the city.
Baradar not only understood; he agreed. Known as a daring military tactician, he was also a pragmatist. He wanted to transform his group’s inhospitable image; he hoped that foreign embassies, even the American one, would remain in Kabul. Baradar didn’t want a Taliban government to become a pariah state, starved of foreign assistance that it badly needed.
But the McKenzie plan had an elemental problem: It was too late. Taliban fighters were already operating within the ring of death. Kabul was on the brink of anarchy. Armed criminal gangs were already starting to roam the streets. Baradar asked the general, “Are you going to take responsibility for the security of Kabul?” McKenzie replied that his orders were to run an evacuation. Whatever happens to the security situation in Kabul, he told Baradar, don’t mess with the evacuation, or there will be hell to pay.
It was an evasive answer. The United States didn’t have the troops or the will to secure Kabul. McKenzie had no choice but to implicitly cede that job to the Taliban.
Baradar walked toward a window. Because he didn’t speak English, he wanted his adviser to confirm his understanding. “Is he saying that he won’t attack us if we go in?” His adviser told him that he had heard correctly.
As the meeting wrapped up, McKenzie realized that the United States would need to be in constant communication with the Taliban. They were about to be rubbing shoulders with each other in a dense city. Misunderstandings were inevitable. Both sides agreed that they would designate a representative in Kabul to talk through the many complexities so that the old enemies could muddle together toward a common purpose.
Soon after McKenzie and Baradar ended their meeting, Al Jazeera carried a live feed from the presidential palace, showing the Taliban as they went from room to room, in awe of the building, seemingly bemused by their own accomplishment.
They gathered in Ghani’s old office, where a book of poems remained on his desk, across from a box of Kleenex. A Talib sat in the president’s Herman Miller chair. His comrades stood behind him in a tableau, cloth draped over the shoulders of their tunics, guns resting in the crooks of their arms, as if posing for an official portrait.
August 16 The U.S. embassy, now relocated to the airport, became a magnet for humanity. The extent of Afghan desperation shocked officials back in Washington. Only amid the panicked exodus did top officials at the State Department realize that hundreds of thousands of Afghans had fled their homes as civil war swept through the countryside—and made their way to the capital.
The runway divided the airport into halves.
A northern sector served as a military outpost and, after the relocation of the embassy, a consular office—the last remaining vestiges of the United States and its promise of liberation. A commercial airport stared at these barracks from across the strip of asphalt.
The commercial facility had been abandoned by the Afghans who worked there. The night shift of air-traffic controllers simply never arrived. The U.S. troops whom Austin had ordered to support the evacuation were only just arriving. So the terminal was overwhelmed. Afghans began to spill onto the tarmac itself.
The crowds arrived in waves. The previous day, Afghans had flooded the tarmac late in the day, then left when they realized that no flights would depart that evening. But in the morning, the compound still wasn’t secure, and it refilled.
In the chaos, it wasn’t entirely clear to Ambassador Wilson who controlled the compound. The Taliban began freely roaming the facility, wielding bludgeons, trying to secure the mob. Apparently, they were working alongside soldiers from the old Afghan army. Wilson received worrying reports of tensions between the two forces.
The imperative was to begin landing transport planes with equipment and soldiers. A C‑17, a warehouse with wings, full of supplies to support the arriving troops, managed to touch down.
The crew lowered a ramp to unload the contents of the jet’s belly, but the plane was rushed by a surge of civilians. The Americans on board were no less anxious than the Afghans who greeted them. Almost as quickly as the plane’s back ramp lowered, the crew reboarded and resealed the jet’s entrances. They received permission to flee the uncontrolled scene.
But they could not escape the crowd, for whom the jet was a last chance to avoid the Taliban and the suffering to come. As the plane began to taxi, about a dozen Afghans climbed onto one side of the jet. Others sought to stow away in the wheel well that housed its bulging landing gear. To clear the runway of human traffic, Humvees began rushing alongside the plane. Two Apache helicopters flew just above the ground, to give the Afghans a good scare and to blast the civilians from the plane with rotor wash.
Only after the plane had lifted into the air did the crew discover its place in history. When the pilot couldn’t fully retract the landing gear, a member of the crew went to investigate, staring out of a small porthole. Through the window, it was possible to see scattered human remains.
Videos taken from the tarmac instantly went viral.
They showed a dentist from Kabul plunging to the ground from the elevating jet. The footage evoked the photo of a man falling to his death from an upper story of the World Trade Center—images of plummeting bodies bracketing an era.
Over the weekend, Biden had received briefings about the chaos in Kabul in a secure conference room at Camp David. Photographs distributed to the press showed him alone, talking to screens, isolated in his contrarian faith in the righteousness of his decision. Despite the fiasco at the airport, he returned to the White House, stood in the East Room, and proclaimed: “If anything, the developments of the past week reinforced that ending U.S. military involvement in Afghanistan now was the right decision. American troops cannot and should not be fighting in a war and dying in a war that Afghan forces are not willing to fight for themselves.” August 17 John Bass was having a hard time keeping his mind on the task at hand. From 2017 to 2020, he had served as Washington’s ambassador to Afghanistan. During that tour, Bass did his best to immerse himself in the country and meet its people. He’d planted a garden with a group of Girl Scouts and Boy Scouts and hosted roundtables with journalists.
When his term as ambassador ended, he left behind friends, colleagues, and hundreds of acquaintances.
Now Bass kept his eyes on his phone, checking for any word from his old Afghan network. He moved through his day dreading what might come next.
Yet he also had a job that required his attention. The State Department had assigned him to train future ambassadors. In a seminar room in suburban Virginia, he did his best to focus on passing along wisdom to these soon‑to‑be emissaries of the United States.
As class was beginning, his phone lit up. Bass saw the number of the State Department Operations Center. He apologized and stepped out to take the call.
“Are you available to talk to Deputy Secretary Sherman?” The familiar voice of Wendy Sherman, the No. 2 at the department, came on the line. “I have a mission for you. You must take it, and you need to leave today.” Sherman then told him: “I’m calling to ask you to go back to Kabul to lead the evacuation effort.” Ambassador Wilson was shattered by the experience of the past week and wasn’t “able to function at the level that was necessary” to complete the job on his own. Sherman needed Bass to help manage the exodus.
Bass hadn’t expected the request. In his flummoxed state, he struggled to pose the questions he thought he might later regret not having asked.
“How much time do we have?” “Probably about two weeks, a little less than two weeks.” “I’ve been away from this for 18 months or so.” “Yep, we know, but we think you’re the right person for this.” Bass returned to class and scooped up his belongings. “With apologies, I’m going to have to take my leave. I’ve just been asked to go back to Kabul and support the evacuations. So I’ve got to say goodbye and wish you all the best, and you’re all going to be great ambassadors.” Because he wasn’t living in Washington, Bass didn’t have the necessary gear with him. He drove straight to the nearest REI in search of hiking pants and rugged boots. He needed to pick up a laptop from the IT department in Foggy Bottom. Without knowing much more than what was in the news, Bass rushed to board a plane taking him to the worst crisis in the recent history of American foreign policy.
August 19–25 About 30 hours later—3:30 a.m., Kabul time—Bass touched down at HKIA and immediately began touring the compound. At the American headquarters, he ran into the military heads of the operation, whom he had worked with before. They presented Bass with the state of play. The situation was undeniably bizarre: The success of the American operation now depended largely on the cooperation of the Taliban.
The Americans needed the Taliban to help control the crowds that had formed outside the airport—and to implement systems that would allow passport and visa holders to pass through the throngs. But the Taliban were imperfect allies at best. Their checkpoints were run by warriors from the countryside who didn’t know how to deal with the array of documents being waved in their faces. What was an authentic visa? What about families where the father had a U.S. passport but his wife and children didn’t? Every day, a new set of Taliban soldiers seemed to arrive at checkpoints, unaware of the previous day’s directions. Frustrated with the unruliness, the Taliban would sometimes simply stop letting anyone through.
Abdul Ghani Baradar’s delegation in Doha had passed along the name of a Taliban commander in Kabul—Mawlawi Hamdullah Mukhlis. It had fallen to Major General Chris Donahue, the head of the 82nd Airborne Division, out of Fort Bragg, to coordinate with him. On September 11, 2001, Donahue had been an aide to the vice chairman of the Joint Chiefs, Richard Myers, and had been with him on Capitol Hill when the first plane struck the World Trade Center.
Donahue told Pentagon officials that he had to grit his teeth as he dealt with Mukhlis. But the Taliban commander seemed to feel a camaraderie with his fellow soldier. He confided to Donahue his worry that Afghanistan would suffer from brain drain, as the country’s most talented minds evacuated on American airplanes.
In a videoconference with Mark Milley, back at the Pentagon, Donahue recounted Mukhlis’s fears. According to one Defense Department official in the meeting, his description caused Milley to laugh.
“Don’t be going local on me, Donahue,” he said.
“Don’t worry about me, sir,” Donahue responded. “I’m not buying what they are selling.” After Bass left his meeting with the military men, including Donahue, he toured the gates of the airport, where Afghans had amassed. He was greeted by the smell of feces and urine, by the sound of gunshots and bullhorns blaring instructions in Dari and Pashto. Dust assaulted his eyes and nose. He felt the heat that emanated from human bodies crowded into narrow spaces.
The atmosphere was tense. Marines and consular officers, some of whom had flown into Kabul from other embassies, were trying to pull passport and visa holders from the crowd. But every time they waded into it, they seemed to provoke a furious reaction. To get plucked from the street by the Americans smacked of cosmic unfairness to those left behind. Sometimes the anger swelled beyond control, so the troops shut down entrances to allow frustrations to subside. Bass was staring at despair in its rawest form. As he studied the people surrounding the airport, he wondered if he could ever make any of this a bit less terrible.
Bass cadged a room in barracks belonging to the Turkish army, which had agreed, before the chaos had descended, to operate and protect the airport after the Americans finally departed. His days tended to follow a pattern. They would begin with the Taliban’s grudging assistance. Then, as lunchtime approached, the Talibs would get hot and hungry. Abruptly, they would stop processing evacuees through their checkpoints. Then, just as suddenly, at six or seven, as the sun began to set, they would begin to cooperate again.
Bass was forever hatching fresh schemes to satisfy the Taliban’s fickle requirements. One day, the Taliban would let buses through without question; the next, they would demand to see passenger manifests in advance. Bass’s staff created official-looking placards to place in bus windows. The Taliban waved them through for a short period, then declared the placard system unreliable.
Throughout the day, Bass would stop what he was doing and join videoconferences with Washington. He became a fixture in the Situation Room. Biden would pepper him with ideas for squeezing more evacuees through the gates. The president’s instinct was to throw himself into the intricacies of troubleshooting.
Why don’t we have them meet in parking lots? Can’t we leave the airport and pick them up? Bass would kick around Biden’s proposed solutions with colleagues to determine their plausibility, which was usually low. Still, he appreciated Biden applying pressure, making sure that he didn’t overlook the obvious.
At the end of his first day at the airport, Bass went through his email. A State Department spokesperson had announced Bass’s arrival in Kabul. Friends and colleagues had deluged him with requests to save Afghans. Bass began to scrawl the names from his inbox on a whiteboard in his office. By the time he finished, he’d filled the six-foot‑by‑four-foot surface. He knew there was little chance that he could help. The orders from Washington couldn’t have been clearer. The primary objective was to load planes with U.S. citizens, U.S.-visa holders, and passport holders from partner nations, mostly European ones.
In his mind, Bass kept another running list, of Afghans he had come to know personally during his time as ambassador who were beyond his ability to rescue. Their faces and voices were etched in his memory, and he could be sure that, at some point when he wasn’t rushing to fill C‑17s, they would haunt his sleep.
“Someone on the bus is dying.” Jake Sullivan was unnerved. What to do with such a dire message from a trusted friend? It described a caravan of five blue-and-white buses stuck 100 yards outside the south gate of the airport, one of them carrying a human being struggling for life. If Sullivan forwarded this problem to an aide, would it get resolved in time? Sullivan sometimes felt as if every member of the American elite was simultaneously asking for his help. When he left secure rooms, he would grab his phone and check his personal email accounts, which overflowed with pleas.
This person just had the Taliban threaten them. They will be shot in 15 hours if you don’t get them out.
Some of the senders seemed to be trying to shame him into action.
If you don’t do something, their death is on your hands.
Throughout late August, the president himself was fielding requests to help stranded Afghans, from friends and members of Congress. Biden became invested in individual cases. Three buses of women at the Kabul Serena Hotel kept running into logistical obstacles. He told Sullivan, “I want to know what happens to them. I want to know when they make it to the airport.” When the president heard these stories, he would become engrossed in solving the practical challenge of getting people to the airport, mapping routes through the city.
When Wendy Sherman, the deputy secretary of state, went to check in with members of a task force working on the evacuation, she found grizzled diplomats in tears. She estimated that a quarter of the State Department’s personnel had served in Afghanistan. They felt a connection with the country, an emotional entanglement. Fielding an overwhelming volume of emails describing hardship cases, they easily imagined the faces of refugees. They felt the shame and anger that come with the inability to help. To deal with the trauma, the State Department procured therapy dogs that might ease the staff’s pain.
The State Department redirected the attention of its sprawling apparatus to Afghanistan. Embassies in Mexico City and New Delhi became call centers. Staff in those distant capitals assumed the role of caseworkers, assigned to stay in touch with the remaining American citizens in Afghanistan, counseling them through the terrifying weeks.
Sherman dispatched her Afghan-born chief of staff, Mustafa Popal, to HKIA to support embassy workers and serve as an interpreter. All day long, Sherman responded to pleas for help: from foreign governments’ representatives, who joined a daily videoconference she hosted; from members of Congress; from the cellist Yo‑Yo Ma, writing on behalf of musicians. Amid the crush, she felt compelled to go down to the first floor, to spend 15 minutes cuddling the therapy dogs.
The Biden administration hadn’t intended to conduct a full-blown humanitarian evacuation of Afghanistan. It had imagined an orderly and efficient exodus that would extend past August 31, as visa holders boarded commercial flights from the country. As those plans collapsed, the president felt the same swirl of emotions as everyone else watching the desperation at the airport. Over the decades, he had thought about Afghanistan using the cold logic of realism—it was a strategic distraction, a project whose costs outweighed the benefits. Despite his many visits, the country had become an abstraction in his mind. But the graphic suffering in Kabul awakened in him a compassion that he’d never evinced in the debates about the withdrawal.
After seeing the abject desperation on the HKIA tarmac, the president had told the Situation Room that he wanted all the planes flying thousands of troops into the airport to leave filled with evacuees. Pilots should pile American citizens and Afghans with visas into those planes. But there was a category of evacuees that he now especially wanted to help, what the government called “Afghans at risk.” These were the newspaper reporters, the schoolteachers, the filmmakers, the lawyers, the members of a girls’ robotics team who didn’t necessarily have paperwork but did have every reason to fear for their well-being in a Taliban-controlled country.
This was a different sort of mission. The State Department hadn’t vetted all of the Afghans at risk. It didn’t know if they were genuinely endangered or simply strivers looking for a better life. It didn’t know if they would have qualified for the visas that the administration said it issued to those who worked with the Americans, or if they were petty criminals. But if they were in the right place at the right time, they were herded up the ramp of C‑17s.
In anticipation of an evacuation, the United States had built housing at Camp As Sayliyah, a U.S. Army base in the suburbs of Doha. It could hold 8,000 people, housing them as the Department of Homeland Security collected their biometric data and began to vet them for immigration. But it quickly became clear that the United States would fly far more than 8,000 Afghans to Qatar.
As the numbers swelled, the United States set up tents at Al Udeid Air Base, a bus ride away from As Sayliyah. Nearly 15,000 Afghans took up residence there, but their quarters were poorly planned.
There weren’t nearly enough toilets or showers. Procuring lunch meant standing in line for three or four hours. Single men slept in cots opposite married women, a transgression of Afghan traditions.
The Qataris, determined to use the crisis to burnish their reputation, erected a small city of air-conditioned wedding tents and began to cater meals for the refugees. But the Biden administration knew that the number of evacuees would soon exceed Qatar’s capacity. It needed to erect a network of camps. What it created was something like the hub-and-spoke system used by commercial airlines. Refugees would fly into Al Udeid and then be redirected to bases across the Middle East and Europe, what the administration termed “lily pads.” In September, just as refugees were beginning to arrive at Dulles International Airport, outside Washington, D.C., four Afghan evacuees caught the measles.
All the refugees in the Middle East and Europe now needed vaccinations, which would require 21 days for immunity to take hold. To keep disease from flying into the United States, the State Department called around the world, asking if Afghans could stay on bases for three extra weeks.
In the end, the U.S. government housed more than 60,000 Afghans in facilities that hadn’t existed before the fall of Kabul. It flew 387 sorties from HKIA.
At the height of the operation, an aircraft took off every 45 minutes.
A terrible failure of planning necessitated a mad scramble—a mad scramble that was an impressive display of creative determination.
Even as the administration pulled off this feat of logistics, it was pilloried for the clumsiness of the withdrawal.
The New York Times’ David Sanger had written, “After seven months in which his administration seemed to exude much-needed competence—getting more than 70 percent of the country’s adults vaccinated, engineering surging job growth and making progress toward a bipartisan infrastructure bill—everything about America’s last days in Afghanistan shattered the imagery.” Biden didn’t have time to voraciously consume the news, but he was well aware of the coverage, and it infuriated him. It did little to change his mind, though. In the caricature version of Joe Biden that had persisted for decades, he was highly sensitive to shifts in opinion, especially when they emerged from columnists at the Post or the Times.
The criticism of the withdrawal caused him to justify the chaos as the inevitable consequence of a difficult decision, even though he had never publicly, or privately, predicted it. Through the whole last decade of the Afghan War, he had detested the conventional wisdom of the foreign-policy elites. They were willing to stay forever, no matter the cost. After defying their delusional promises of progress for so long, he wasn’t going to back down now. In fact, everything he’d witnessed from his seat in the Situation Room confirmed his belief that exiting a war without hope was the best and only course.
So much of the commentary felt overheated to him. He said to an aide: Either the press is losing its mind, or I am.
August 26 Every intelligence official watching Kabul was obsessed with the possibility of an attack by ISIS-Khorasan, or ISIS‑K, the Afghan offshoot of the Islamic State, which dreamed of a new caliphate in Central Asia. As the Taliban stormed across Afghanistan, they unlocked a prison at Bagram Air Base, freeing hardened ISIS‑K adherents.
ISIS‑K had been founded by veterans of the Pakistani and Afghan Taliban who had broken with their groups, on the grounds that they needed to be replaced by an even more militant vanguard. The intelligence community had been sorting through a roaring river of unmistakable warnings about an imminent assault on the airport.
As the national-security team entered the Situation Room for a morning meeting, it consumed an early, sketchy report of an explosion at one of the gates to HKIA, but it was hard to know if there were any U.S. casualties. Everyone wanted to believe that the United States had escaped unscathed, but everyone had too much experience to believe that. General McKenzie appeared via videoconference in the Situation Room with updates that confirmed the room’s suspicions of American deaths. Biden hung his head and quietly absorbed the reports. In the end, the explosion killed 13 U.S. service members and more than 150 Afghan civilians.
August 29–30 The remains of the dead service members were flown to Dover Air Force Base, in Delaware, for a ritual known as the dignified transfer: Flag-draped caskets are marched down the gangway of a transport plane and driven to the base’s mortuary.
So much about the withdrawal had slipped beyond Biden’s control. But grieving was his expertise. If there was one thing that everyone agreed Biden did more adroitly than any other public official, it was comforting survivors. The Irish journalist Fintan O’Toole once called him “the Designated Mourner.” Accompanied by his wife, Jill; Mark Milley; Antony Blinken; and Lloyd Austin, Biden made his way to a private room where grieving families had gathered. He knew he would be standing face to face with unbridled anger. A father had already turned his back on Austin and was angrily shouting at Milley, who held up his hands in the posture of surrender.
When Biden entered, he shook the hand of Mark Schmitz, who had lost his 20-year-old son, Jared. In his sorrow, Schmitz couldn’t decide whether he wanted to sit in the presence of the president. According to a report in The Washington Post, the night before, he had told a military officer that he didn’t want to speak to the man whose incompetence he blamed for his son’s death. In the morning, he changed his mind.
Schmitz told the Post that he couldn’t help but glare in Biden’s direction. When Biden approached, he held out a photo of Jared. “Don’t you ever forget that name. Don’t you ever forget that face. Don’t you ever forget the names of the other 12. And take some time to learn their stories.” “I do know their stories,” Biden replied.
After the dignified transfer, the families piled onto a bus. A sister of one of the dead screamed in Biden’s direction: “I hope you burn in hell.” Of all the moments in August, this was the one that caused the president to second-guess himself. He asked Press Secretary Jen Psaki: Did I do something wrong? Maybe I should have handled that differently.
As Biden left, Milley saw the pain on the president’s face. He told him: “You made a decision that had to be made. War is a brutal, vicious undertaking. We’re moving forward to the next step.” That afternoon, Biden returned to the Situation Room. There was pressure, from the Hill and talking heads, to push back the August 31 deadline. But everyone in the room was terrified by the intelligence assessments about ISIS‑K. If the U.S. stayed, it would be hard to avoid the arrival of more caskets at Dover.
As Biden discussed the evacuation, he received a note, which he passed to Milley. According to a White House official present in the room, the general read it aloud: “If you want to catch the 5:30 Mass, you have to leave now.” He turned to the president. “My mother always said it’s okay to miss Mass if you’re doing something important. And I would argue that this is important.” He paused, realizing that the president might need a moment after his bruising day. “This is probably also a time when we need prayers.” Biden gathered himself to leave. As he stood from his chair, he told the group, “I will be praying for all of you.” On the morning of the 30th, John Bass was cleaning out his office. An alarm sounded, and he rushed for cover. A rocket flew over the airport from the west and a second crashed into the compound, without inflicting damage.
Bass, ever the stoic, turned to a colleague. “Well, that’s about the only thing that hasn’t happened so far.” He was worried that the rockets weren’t a parting gift, but a prelude to an attack.
Earlier that morning, though, Bass had implored Major General Donahue to delay the departure. He’d asked his military colleagues to remain at the outer access points, because there were reports of American citizens still making their way to them.
Donahue was willing to give Bass a few extra hours. And around 3 a.m., 60 more American-passport holders arrived at the airport. Then, as if anticipating a final burst of American generosity toward refugees, the Taliban opened their checkpoints. A flood of Afghans rushed toward the airport. Bass sent consular officers to stand at the perimeter of concertina wire, next to the paratroopers, scanning for passports, visas, any official-looking document.
An officer caught a glimpse of an Afghan woman in her 20s waving a printout showing that she had received permission to enter the U.S. “Wow. You won the lottery twice,” he told her. “You’re the visa-lottery winner and you’ve made it here in time.” She was one of the final evacuees hustled into the airport.
Around 7 a.m., the last remaining State Department officials in Kabul, including Bass, posed for a photo and then walked up the ramp of a C-17. As Bass prepared for takeoff, he thought about two numbers. In total, the United States had evacuated about 124,000 people, which the White House touted as the most successful airlift in history. Bass also thought about the unknown number of Afghans he had failed to get out. He thought about the friends he couldn’t extricate. He thought about the last time he’d flown out of Kabul, 18 months earlier, and how he had harbored a sense of optimism for the country then. A hopefulness that now felt as remote as the Hindu Kush.
In a command center in the Pentagon’s basement, Lloyd Austin and Mark Milley followed events at the airport through a video feed provided by a drone, the footage filtered through the hazy shades of a night-vision lens. They watched in silence as Donahue, the last American soldier on the ground in Afghanistan, boarded the last C-17 to depart HKIA.
Five C‑17s sat on the runway—carrying “chalk,” as the military refers to the cargo of troops. An officer in the command center narrated the procession for them. “Chalk 1 loaded … Chalk 2 taxiing.” As the planes departed, there was no applause, no hand-shaking. A murmur returned to the room. Austin and Milley watched the great military project of their generation—a war that had cost the lives of comrades, that had taken them away from their families—end without remark. They stood without ceremony and returned to their offices.
Across the Potomac River, Biden sat with Jake Sullivan and Antony Blinken, revising a speech he would deliver the next day. One of Sullivan’s aides passed him a note, which he read to the group: “Chalk 1 in the air.” A few minutes later, the aide returned with an update. All of the planes were safely away.
Some critics had clamored for Biden to fire the advisers who had failed to plan for the chaos at HKIA, to make a sacrificial offering in the spirit of self-abasement. But Biden never deflected blame onto staff. In fact, he privately expressed gratitude to them. And with the last plane in the air, he wanted Blinken and Sullivan to join him in the private dining room next to the Oval Office as he called Austin to thank him. The secretary of defense hadn’t agreed with Biden’s withdrawal plan, but he’d implemented it in the spirit of a good soldier.
America’s longest war was now finally and officially over. Each man looked exhausted. Sullivan hadn’t slept for more than two hours a night over the course of the evacuation. Biden aides sensed that he hadn’t rested much better. Nobody needed to mention how the trauma and political scars might never go away, how the month of August had imperiled a presidency. Before returning to the Oval Office, they spent a moment together, lingering in the melancholy.
This article was adapted from Franklin Foer’s book The Last Politician: Inside Joe Biden’s White House and the Struggle for America’s Future.
It appears in the October 2023 print edition with the headline “The Final Days.”
" |
293 | 2,022 | "Inside the War Between Trump and His Generals | The New Yorker" | "https://www.newyorker.com/magazine/2022/08/15/inside-the-war-between-trump-and-his-generals" | "Letter from Washington Inside the War Between Trump and His Generals By Susan B. Glasser and Peter Baker As the President’s behavior grew increasingly erratic, General Mark Milley told his staff, “I will fight from the inside.” In the summer of 2017, after just half a year in the White House, Donald Trump flew to Paris for Bastille Day celebrations thrown by Emmanuel Macron, the new French President. Macron staged a spectacular martial display to commemorate the hundredth anniversary of the American entrance into the First World War. Vintage tanks rolled down the Champs-Élysées as fighter jets roared overhead. The event seemed to be calculated to appeal to Trump—his sense of showmanship and grandiosity—and he was visibly delighted. The French general in charge of the parade turned to one of his American counterparts and said, “You are going to be doing this next year.” Sure enough, Trump returned to Washington determined to have his generals throw him the biggest, grandest military parade ever for the Fourth of July. The generals, to his bewilderment, reacted with disgust. “I’d rather swallow acid,” his Defense Secretary, James Mattis, said. Struggling to dissuade Trump, officials pointed out that the parade would cost millions of dollars and tear up the streets of the capital.
But the gulf between Trump and the generals was not really about money or practicalities, just as their endless policy battles were not only about clashing views on whether to withdraw from Afghanistan or how to combat the nuclear threat posed by North Korea and Iran. The divide was also a matter of values, of how they viewed the United States itself. That was never clearer than when Trump told his new chief of staff, John Kelly—like Mattis, a retired Marine Corps general—about his vision for Independence Day. “Look, I don’t want any wounded guys in the parade,” Trump said. “This doesn’t look good for me.” He explained with distaste that at the Bastille Day parade there had been several formations of injured veterans, including wheelchair-bound soldiers who had lost limbs in battle.
Kelly could not believe what he was hearing. “Those are the heroes,” he told Trump. “In our society, there’s only one group of people who are more heroic than they are—and they are buried over in Arlington.” Kelly did not mention that his own son Robert, a lieutenant killed in action in Afghanistan, was among the dead interred there.
“I don’t want them,” Trump repeated. “It doesn’t look good for me.” The subject came up again during an Oval Office briefing that included Trump, Kelly, and Paul Selva, an Air Force general and the vice-chairman of the Joint Chiefs of Staff. Kelly joked in his deadpan way about the parade. “Well, you know, General Selva is going to be in charge of organizing the Fourth of July parade,” he told the President. Trump did not understand that Kelly was being sarcastic. “So, what do you think of the parade?” Trump asked Selva. Instead of telling Trump what he wanted to hear, Selva was forthright.
“I didn’t grow up in the United States, I actually grew up in Portugal,” Selva said. “Portugal was a dictatorship—and parades were about showing the people who had the guns. And in this country, we don’t do that.” He added, “It’s not who we are.” Even after this impassioned speech, Trump still did not get it. “So, you don’t like the idea?” he said, incredulous.
“No,” Selva said. “It’s what dictators do.” The four years of the Trump Presidency were characterized by a fantastical degree of instability: fits of rage, late-night Twitter storms, abrupt dismissals. At first, Trump, who had dodged the draft by claiming to have bone spurs, seemed enamored with being Commander-in-Chief and with the national-security officials he’d either appointed or inherited. But Trump’s love affair with “my generals” was brief, and in a statement for this article the former President confirmed how much he had soured on them over time. “These were very untalented people and once I realized it, I did not rely on them, I relied on the real generals and admirals within the system,” he said.
It turned out that the generals had rules, standards, and expertise, not blind loyalty. The President’s loud complaint to John Kelly one day was typical: “You fucking generals, why can’t you be like the German generals?” “Which generals?” Kelly asked.
“The German generals in World War II,” Trump responded.
“You do know that they tried to kill Hitler three times and almost pulled it off?” Kelly said.
But, of course, Trump did not know that. “No, no, no, they were totally loyal to him,” the President replied. In his version of history, the generals of the Third Reich had been completely subservient to Hitler; this was the model he wanted for his military. Kelly told Trump that there were no such American generals, but the President was determined to test the proposition.
By late 2018, Trump wanted his own handpicked chairman of the Joint Chiefs of Staff. He had tired of Joseph Dunford, a Marine general who had been appointed chairman by Barack Obama, and who worked closely with Mattis as they resisted some of Trump’s more outlandish ideas. Never mind that Dunford still had most of a year to go in his term. For months, David Urban, a lobbyist who ran the winning 2016 Trump campaign in Pennsylvania, had been urging the President and his inner circle to replace Dunford with a more like-minded chairman, someone less aligned with Mattis, who had commanded both Dunford and Kelly in the Marines.
Mattis’s candidate to succeed Dunford was David Goldfein, an Air Force general and a former F-16 fighter pilot who had been shot down in the Balkans and successfully evaded capture. No one could remember a President selecting a chairman over the objections of his Defense Secretary, but word came back to the Pentagon that there was no way Trump would accept just one recommendation. Two obvious contenders from the Army, however, declined to be considered: General Curtis Scaparrotti, the NATO Supreme Allied Commander in Europe, told fellow-officers that there was “no gas left in my tank” to deal with being Trump’s chairman. General Joseph Votel, the Central Command chief, also begged off, telling a colleague he was not a good fit to work so closely with Mattis.
Urban, who had attended West Point with Trump’s Secretary of State, Mike Pompeo, and remained an Army man at heart, backed Mark Milley, the chief of staff of the Army. Milley, who was then sixty, was the son of a Navy corpsman who had served with the 4th Marine Division at Iwo Jima. He grew up outside Boston and played hockey at Princeton. As an Army officer, Milley commanded troops in Afghanistan and Iraq, led the 10th Mountain Division, and oversaw the Army Forces Command. A student of history who often carried a pile of the latest books on the Second World War with him, Milley was decidedly not a member of the close-knit Marine fraternity that had dominated national-security policy for Trump’s first two years. Urban told the President that he would connect better with Milley, who was loquacious and blunt to the point of being rude, and who had the Ivy League pedigree that always impressed Trump.
Milley had already demonstrated those qualities in meetings with Trump as the Army chief of staff. “Milley would go right at why it’s important for the President to know this about the Army and why the Army is the service that wins all the nation’s wars. He had all those sort of elevator-speech punch lines,” a senior defense official recalled. “He would have that big bellowing voice and be right in his face with all the one-liners, and then he would take a breath and he would say, ‘Mr. President, our Army is here to serve you. Because you’re the Commander-in-Chief.’ It was a very different approach, and Trump liked that.”

And, like Trump, Milley was not a subscriber to the legend of Mad Dog Mattis, whom he considered a “complete control freak.” Mattis, for his part, seemed to believe that Milley was inappropriately campaigning for the job, and Milley recalled to others that Mattis confronted him at a reception that fall, saying, “Hey, you shouldn’t run for office. You shouldn’t run to be the chairman.” Milley later told people that he had replied sharply to Mattis, “I’m not lobbying for any fucking thing. I don’t do that.” Milley eventually raised the issue with Dunford. “Hey, Mattis has got this in his head,” Milley told him. “I’m telling you it ain’t me.” Milley even claimed that he had begged Urban to cease promoting his candidacy.
In November, 2018, the day before Milley was scheduled for an interview with Trump, he and Mattis had another barbed encounter at the Pentagon. In Milley’s recounting of the episode later to others, Mattis urged him to tell Trump that he wanted to be the next Supreme Allied Commander in Europe, rather than the chairman of the Joint Chiefs. Milley said he would not do that but would instead wait to hear what the President wanted him to do. This would end whatever relationship the two generals had.
When Milley arrived at the White House the next day, he was received by Kelly, who seemed to him unusually distraught. Before they headed into the Oval Office to meet with Trump, Milley asked Kelly what he thought.
“You should go to Europe and just get the fuck out of D.C.,” Kelly said. The White House was a cesspool: “Just get as far away as you can.”

In the Oval Office, Trump said right from the start that he was considering Milley for chairman of the Joint Chiefs. When Trump offered him the job, Milley replied, “Mr. President, I’ll do whatever you ask me to do.” For the next hour, they talked about the state of the world. Immediately, there were points of profound disagreement. On Afghanistan, Milley said he believed that a complete withdrawal of American troops, as Trump wanted, would cause a serious new set of problems. And Milley had already spoken out publicly against the banning of transgender troops, which Trump was insisting on.
“Mattis tells me you are weak on transgender,” Trump said.
“No, I am not weak on transgender,” Milley replied. “I just don’t care who sleeps with who.”

There were other differences as well, but in the end Milley assured him, “Mr. President, you’re going to be making the decisions. All I can guarantee from me is I’m going to give you an honest answer, and I’m not going to talk about it on the front page of the Washington Post. I’ll give you an honest answer on everything I can. And you’re going to make the decisions, and as long as they’re legal I’ll support it.”

As long as they’re legal.
It was not clear how much that caveat even registered with Trump. The decision to name Milley was a rare chance, as Trump saw it, to get back at Mattis. Trump would confirm this years later, after falling out with both men, saying that he had picked Milley only because Mattis “could not stand him, had no respect for him, and would not recommend him.”

Late on the evening of December 7th, Trump announced that he would reveal a big personnel decision having to do with the Joint Chiefs the next day, in Philadelphia, at the hundred-and-nineteenth annual Army-Navy football game. This was all the notice Dunford had that he was about to be publicly humiliated. The next morning, Dunford was standing with Milley at the game waiting for the President to arrive when Urban, the lobbyist, showed up. Urban hugged Milley. “We did it!” Urban said. “We did it!”

But Milley’s appointment was not even the day’s biggest news. As Trump walked to his helicopter to fly to the game, he dropped another surprise. “John Kelly will be leaving toward the end of the year,” he told reporters. Kelly had lasted seventeen months in what he called “the worst fucking job in the world.” For Trump, the decision was a turning point. Instead of installing another strong-willed White House chief of staff who might have told him no, the President gravitated toward one who would basically go along with whatever he wanted.

A week later, Kelly made an unsuccessful last-ditch effort to persuade Trump not to replace him with Mick Mulvaney, a former congressman from South Carolina who was serving as Trump’s budget director. “You don’t want to hire someone who’s going to be a yes-man,” Kelly told the President. “I don’t give a shit anymore,” Trump replied. “I want a yes-man!”

A little more than a week after that, Mattis was out, too, having quit in protest over Trump’s order that the U.S. abruptly withdraw its forces from Syria right after Mattis had met with American allies fighting alongside the U.S. It was the first time in nearly four decades that a major Cabinet secretary had resigned over a national-security dispute with the President.
The so-called “axis of adults” was over. None of them had done nearly as much to restrain Trump as the President’s critics thought they should have. But all of them—Kelly, Mattis, Dunford, plus H. R. McMaster, the national-security adviser, and Rex Tillerson, Trump’s first Secretary of State—had served as guardrails in one way or another. Trump hoped to replace them with more malleable figures. As Mattis would put it, Trump was so out of his depth that he had decided to drain the pool.
On January 2, 2019, Kelly sent a farewell e-mail to the White House staff. He said that these were the people he would miss: “The selfless ones, who work for the American people so hard and never lowered themselves to wrestle in the mud with the pigs. The ones who stayed above the drama, put personal ambition and politics aside, and simply worked for our great country. The ones who were ethical, moral and always told their boss what he or she NEEDED to hear, as opposed to what they might have wanted to hear.”

That same morning, Mulvaney showed up at the White House for his first official day as acting chief of staff. He called an all-hands meeting and made an announcement: O.K., we’re going to do things differently. John Kelly’s gone, and we’re going to let the President be the President.
In the fall of 2019, nearly a year after Trump named him the next chairman of the Joint Chiefs, Milley finally took over the position from Dunford. Two weeks into the job, Milley sat at Trump’s side in a meeting at the White House with congressional leaders to discuss a brewing crisis in the Middle East. Trump had again ordered the withdrawal of U.S. forces from Syria, imperilling America’s Kurdish allies and effectively handing control of the territory over to the Syrian government and Russian military forces. The House—amid impeachment proceedings against the President for holding up nearly four hundred million dollars in security assistance to Ukraine as leverage to demand an investigation of his Democratic opponent—passed a nonbinding resolution rebuking Trump for the pullout. Even two-thirds of the House Republicans voted for it.
At the meeting, the Speaker of the House, Nancy Pelosi, pointed out the vote against the President. “Congratulations,” Trump snapped sarcastically. He grew even angrier when the Senate Democratic leader, Chuck Schumer, read out a warning from Mattis that leaving Syria could result in the resurgence of the Islamic State. In response, Trump derided his former Defense Secretary as “the world’s most overrated general. You know why I fired him? I fired him because he wasn’t tough enough.”

Eventually, Pelosi, in her frustration, stood and pointed at the President. “All roads with you lead to Putin,” she said. “You gave Russia Ukraine and Syria.”

“You’re just a politician, a third-rate politician!” Trump shot back.
Finally, Steny Hoyer, the House Majority Leader and Pelosi’s No. 2, had had enough. “This is not useful,” he said, and stood up to leave with the Speaker.
“We’ll see you at the polls,” Trump shouted as they walked out.
When she exited the White House, Pelosi told reporters that she left because Trump was having a “meltdown.” A few hours later, Trump tweeted a White House photograph of Pelosi standing over him, apparently thinking it would prove that she was the one having a meltdown. Instead, the image went viral as an example of Pelosi confronting Trump.
Milley could also be seen in the photograph, his hands clenched together, his head bowed low, looking as though he wanted to sink into the floor. To Pelosi, this was a sign of inexplicable weakness, and she would later say that she never understood why Milley had not been willing to stand up to Trump at that meeting. After all, she would point out, he was the nonpartisan leader of the military, not one of Trump’s toadies. “Milley, you would have thought, would have had more independence,” she told us, “but he just had his head down.”

In fact, Milley was already quite wary of Trump. That night, he called Representative Adam Smith, a Washington Democrat and the chairman of the House Armed Services Committee, who had also been present. “Is that the way these things normally go?” Milley asked. As Smith later put it, “That was the moment when Milley realized that the boss might have a screw or two loose.” There had been no honeymoon. “From pretty much his first day on the job as chairman of the Joint Chiefs,” Smith said, “he was very much aware of the fact that there was a challenge here that was not your normal challenge with a Commander-in-Chief.”

Early on the evening of June 1, 2020, Milley failed what he came to realize was the biggest test of his career: a short walk from the White House across Lafayette Square, minutes after it had been violently cleared of Black Lives Matter protesters. Dressed in combat fatigues, Milley marched behind Trump with a phalanx of the President’s advisers in a photo op, the most infamous of the Trump Presidency, that was meant to project a forceful response to the protests that had raged outside the White House and across the country since the killing, the week before, of George Floyd. Most of the demonstrations had been peaceful, but there were also eruptions of looting, street violence, and arson, including a small fire in St. John’s Church, across from the White House.
In the morning before the Lafayette Square photo op, Trump had clashed with Milley, Attorney General William Barr, and the Defense Secretary, Mark Esper, over his demands for a militarized show of force. “We look weak,” Trump told them. The President wanted to invoke the Insurrection Act of 1807 and use active-duty military to quell the protests. He wanted ten thousand troops in the streets and the 82nd Airborne called up. He demanded that Milley take personal charge. When Milley and the others resisted and said that the National Guard would be sufficient, Trump shouted, “You are all losers! You are all fucking losers!” Turning to Milley, Trump said, “Can’t you just shoot them? Just shoot them in the legs or something?”

Eventually, Trump was persuaded not to send in the military against American citizens. Barr, as the civilian head of law enforcement, was given the lead role in the protest response, and the National Guard was deployed to assist police. Hours later, Milley, Esper, and other officials were abruptly summoned back to the White House and sent marching across Lafayette Square. As they walked, with the scent of tear gas still in the air, Milley realized that he should not be there and made his exit, quietly peeling off to his waiting black Chevy Suburban. But the damage was done. No one would care or even remember that he was not present when Trump held up a Bible in front of the damaged church; people had already seen him striding with the President on live television in his battle dress, an image that seemed to signal that the United States under Trump was, finally, a nation at war with itself. Milley knew this was a misjudgment that would haunt him forever, a “road-to-Damascus moment,” as he would later put it. What would he do about it?

In the days after the Lafayette Square incident, Milley sat in his office at the Pentagon, writing and rewriting drafts of a letter of resignation. There were short versions of the letter; there were long versions. His preferred version was the one that read in its entirety:

I regret to inform you that I intend to resign as your Chairman of the Joint Chiefs of Staff. Thank you for the honor of appointing me as senior ranking officer. The events of the last couple weeks have caused me to do deep soul-searching, and I can no longer faithfully support and execute your orders as Chairman of the Joint Chiefs of Staff. It is my belief that you were doing great and irreparable harm to my country. I believe that you have made a concerted effort over time to politicize the United States military. I thought that I could change that. I’ve come to the realization that I cannot, and I need to step aside and let someone else try to do that.
Second, you are using the military to create fear in the minds of the people—and we are trying to protect the American people. I cannot stand idly by and participate in that attack, verbally or otherwise, on the American people. The American people trust their military and they trust us to protect them against all enemies, foreign and domestic, and our military will do just that. We will not turn our back on the American people.
Third, I swore an oath to the Constitution of the United States and embodied within that Constitution is the idea that says that all men and women are created equal. All men and women are created equal, no matter who you are, whether you are white or Black, Asian, Indian, no matter the color of your skin, no matter if you’re gay, straight or something in between. It doesn’t matter if you’re Catholic, Protestant, Muslim, Jew, or choose not to believe. None of that matters. It doesn’t matter what country you came from, what your last name is—what matters is we’re Americans. We’re all Americans. That under these colors of red, white, and blue—the colors that my parents fought for in World War II—means something around the world. It’s obvious to me that you don’t think of those colors the same way I do. It’s obvious to me that you don’t hold those values dear and the cause that I serve.
And lastly it is my deeply held belief that you’re ruining the international order, and causing significant damage to our country overseas, that was fought for so hard by the Greatest Generation that they instituted in 1945. Between 1914 and 1945, 150 million people were slaughtered in the conduct of war. They were slaughtered because of tyrannies and dictatorships. That generation, like every generation, has fought against that, has fought against fascism, has fought against Nazism, has fought against extremism. It’s now obvious to me that you don’t understand that world order. You don’t understand what the war was all about. In fact, you subscribe to many of the principles that we fought against. And I cannot be a party to that. It is with deep regret that I hereby submit my letter of resignation.
The letter was dated June 8th, a full week after Lafayette Square, but Milley still was not sure if he should give it to Trump. He was sending up flares, seeking advice from a wide circle. He reached out to Dunford, and to mentors such as the retired Army general James Dubik, an expert on military ethics. He called political contacts as well, including members of Congress and former officials from the Bush and Obama Administrations. Most told him what Robert Gates, a former Secretary of Defense and C.I.A. chief, did: “Make them fire you. Don’t resign.”

“My sense is Mark had a pretty accurate measure of the man pretty quickly,” Gates recalled later. “He would tell me over time, well before June 1st, some of the absolutely crazy notions that were put forward in the Oval Office, crazy ideas from the President, things about using or not using military force, the immediate withdrawal from Afghanistan, pulling out of South Korea. It just went on and on.”

Milley was not the only senior official to seek Gates’s counsel. Several members of Trump’s national-security team had made the pilgrimage out to his home in Washington State during the previous two years. Gates would pour them a drink, grill them some salmon, and help them wrestle with the latest Trump conundrum. “The problem with resignation is you can only fire that gun once,” he told them. All the conversations were variations on a theme: “ ‘How do I walk us back from the ledge?’ ‘How do I keep this from happening, because it would be a terrible thing for the country?’ ”

After Lafayette Square, Gates told both Milley and Esper that, given Trump’s increasingly erratic and dangerous behavior, they needed to stay in the Pentagon as long as they could. “If you resign, it’s a one-day story,” Gates told them. “If you’re fired, it makes it clear you were standing up for the right thing.” Gates advised Milley that he had another important card and urged him to play it: “Keep the chiefs on board with you and make it clear to the White House that if you go they all go, so that the White House knows this isn’t just about firing Mark Milley. This is about the entire Joint Chiefs of Staff quitting in response.”

Publicly, Lafayette Square looked like a debacle for Milley. Several retired generals had condemned his participation, pointing out that the leader of a racially diverse military, with more than two hundred thousand active-duty Black troops, could not be seen opposing a movement for racial justice. Even Mattis, who had refrained from openly criticizing Trump, issued a statement about the “bizarre photo op.” The Washington Post reported that Mattis had been motivated to do so by his anger at the image of Milley parading through the square in his fatigues.
Whatever their personal differences, Mattis and Milley both knew that there was a tragic inevitability to the moment. Throughout his Presidency, Trump had sought to redefine the role of the military in American public life. In his 2016 campaign, he had spoken out in support of the use of torture and other practices that the military considered war crimes. Just before the 2018 midterms, he ordered thousands of troops to the southern border to combat a fake “invasion” by a caravan of migrants. In 2019, in a move that undermined military justice and the chain of command, he gave clemency to a Navy SEAL found guilty of posing with the dead body of a captive in Iraq.
Many considered Trump’s 2018 decision to use the military in his preëlection border stunt to be “the predicate—or the harbinger—of 2020,” in the words of Peter Feaver, a Duke University expert on civil-military relations, who taught the subject to generals at command school. When Milley, who had been among Feaver’s students, called for advice after Lafayette Square, Feaver agreed that Milley should apologize but encouraged him not to resign. “It would have been a mistake,” Feaver said. “We have no tradition of resignation in protest amongst the military.”

Milley decided to apologize in a commencement address at the National Defense University that he was scheduled to deliver the week after the photo op. Feaver’s counsel was to own up to the error and make it clear that the mistake was his and not Trump’s. Presidents, after all, “are allowed to do political stunts,” Feaver said. “That’s part of being President.” Milley’s apology was unequivocal. “I should not have been there,” he said in the address. He did not mention Trump. “My presence in that moment, and in that environment, created a perception of the military involved in domestic politics.” It was, he added, “a mistake that I have learned from.”

At the same time, Milley had finally come to a decision. He would not quit. “Fuck that shit,” he told his staff. “I’ll just fight him.” The challenge, as he saw it, was to stop Trump from doing any more damage, while also acting in a way that was consistent with his obligation to carry out the orders of his Commander-in-Chief. Yet the Constitution offered no practical guide for a general faced with a rogue President. Never before since the position had been created, in 1949—or at least since Richard Nixon’s final days, in 1974—had a chairman of the Joint Chiefs encountered such a situation. “If they want to court-martial me, or put me in prison, have at it,” Milley told his staff. “But I will fight from the inside.”

Milley’s apology tour was private as well as public. With the upcoming election fuelling Trump’s sense of frenetic urgency, the chairman sought to get the message to Democrats that he would not go along with any further efforts by the President to deploy the machinery of war for domestic political ends. He called both Pelosi and Schumer. “After the Lafayette Square episode, Milley was extremely contrite and communicated to any number of people that he had no intention of playing Trump’s game any longer,” Bob Bauer, the former Obama White House counsel, who was then advising Joe Biden’s campaign and heard about the calls, said. “He was really burned by that experience. He was appalled. He apologized for it, and it was pretty clear he was digging his heels in.”

On Capitol Hill, however, some Democrats, including Pelosi, remained skeptical. To them, Lafayette Square proved that Milley had been a Trumpist all along. “There was a huge misunderstanding about Milley,” Adam Smith, the House Armed Services Committee chairman, recalled. “A lot of my Democratic colleagues after June 1st in particular were concerned about him.” Smith tried to assure other Democrats that “there was never a single solitary moment where it was possible that Milley was going to help Trump do anything that shouldn’t be done.” And yet Pelosi, among others, also distrusted Milley because of an incident earlier that year in which Trump ordered the killing of the Iranian commander Qassem Suleimani without briefing congressional leaders in advance.
Smith said Pelosi believed that the chairman had been “evasive” and disrespectful to Congress. Milley, for his part, felt he could not disregard Trump’s insistence that lawmakers not be notified—a breach that was due to the President’s pique over the impeachment proceedings against him. “The navigation of Trumpworld was more difficult for Milley than Nancy gives him credit for,” Smith said. He vouched for the chairman but never managed to convince Pelosi.
How long could this standoff between the Pentagon and the President go on? For the next few months, Milley woke up each morning not knowing whether he would be fired before the day was over. His wife told him she was shocked that he had not been cashiered outright when he made his apology.
Esper was also on notice. Two days after Lafayette Square, the Defense Secretary had gone to the Pentagon pressroom and offered his own apology, even revealing his opposition to Trump’s demands to invoke the Insurrection Act and use the active-duty military. Such a step, Esper said, should be reserved only for “the most urgent and dire of situations.” Trump later exploded at Esper in the Oval Office about the criticism, delivering what Milley would recall as “the worst reaming out” he had ever heard.
The next day, Trump’s latest chief of staff, Mark Meadows, called the Defense Secretary at home—three times—to get him to recant his opposition to invoking the Insurrection Act. When he refused, Meadows took “the Tony Soprano approach,” as Esper later put it, and began threatening him, before eventually backing off. (A spokesperson for Meadows disputed Esper’s account.) Esper resolved to stay in office as long as he could, “to endure all the shit and run the clock out,” as he put it. He felt that he had a particular responsibility to hold on. By law, the only person authorized to deploy troops other than the President is the Secretary of Defense. Esper was determined not to hand that power off to satraps such as Robert O’Brien, who had become Trump’s fourth and final national-security adviser, or Ric Grenell, a former public-relations man who had been serving as acting director of National Intelligence.
Both Esper and Milley found new purpose in waiting out the President. They resisted him throughout the summer, as Trump repeatedly demanded that active-duty troops quash ongoing protests, threatened to invoke the Insurrection Act, and tried to stop the military from renaming bases honoring Confederate generals. “They both expected, literally on a daily basis, to be fired,” Gates recalled. Milley “would call me and essentially say, ‘I may not last until tomorrow night.’ And he was comfortable with that. He felt like he knew he was going to support the Constitution, and there were no two ways about it.”

Milley put away the resignation letter in his desk and drew up a plan, a guide for how to get through the next few months. He settled on four goals: First, make sure Trump did not start an unnecessary war overseas. Second, make sure the military was not used in the streets against the American people for the purpose of keeping Trump in power. Third, maintain the military’s integrity. And, fourth, maintain his own integrity. In the months to come, Milley would refer back to the plan more times than he could count.
Even in June, Milley understood that it was not just a matter of holding off Trump until after the Presidential election, on November 3rd. He knew that Election Day might well mark merely the beginning, not the end, of the challenges Trump would pose. The portents were worrisome. Barely one week before Lafayette Square, Trump had posted a tweet that would soon become a refrain. The 2020 Presidential race, he warned for the first time, would end up as “the greatest Rigged Election in history.”

By the evening of Monday, November 9th, Milley’s fears about a volatile post-election period unlike anything America had seen before seemed to be coming true. News organizations had called the election for Biden, but Trump refused to acknowledge that he had lost by millions of votes. The peaceful transition of power—a cornerstone of liberal democracy—was now in doubt. Sitting at home that night at around nine, the chairman received an urgent phone call from the Secretary of State, Mike Pompeo. With the possible exception of Vice-President Mike Pence, no one had been more slavishly loyal in public, or more privately obsequious, to Trump than Pompeo. But even he could not take it anymore.
“We’ve got to talk,” Pompeo told Milley, who was at home in Quarters Six, the red brick house that has been the official residence of chairmen of the Joint Chiefs since the early nineteen-sixties. “Can I come over?” Milley invited Pompeo to visit immediately.
“The crazies have taken over,” Pompeo told him when they sat down at Milley’s kitchen table. Not only was Trump surrounded by the crazies; they were, in fact, ascendant in the White House and, as of that afternoon, inside the Pentagon itself. Just a few hours earlier, on the first workday after the election was called for Biden, Trump had finally fired Esper. Milley and Pompeo were alarmed that the Defense Secretary was being replaced by Christopher Miller, until recently an obscure mid-level counterterrorism official at Trump’s National Security Council, who had arrived at the Pentagon flanked by a team of what appeared to be Trump’s political minders.
For Milley, this was an ominous development. From the beginning, he understood that “if the idea was to seize power,” as he told his staff, “you are not going to do this without the military.” Milley had studied the history of coups. They invariably required the takeover of what he referred to as the “power ministries”—the military, the national police, and the interior forces.
As soon as he’d heard about Esper’s ouster, Milley had rushed upstairs to the Secretary’s office. “This is complete bullshit,” he told Esper. Milley said that he would resign in protest. “You can’t,” Esper insisted. “You’re the only one left.” Once he cooled off, Milley agreed.
In the coming weeks, Milley would repeatedly convene the Joint Chiefs, to bolster their resolve to resist any dangerous political schemes from the White House now that Esper was out. He quoted Benjamin Franklin to them on the virtues of hanging together rather than hanging separately. He told his staff that, if need be, he and all the chiefs were prepared to “put on their uniforms and go across the river together”—to threaten to quit en masse—to prevent Trump from trying to use the military to stay in power illegally.
Soon after Miller arrived at the Pentagon, Milley met with him. “First things first here,” he told the new acting Defense Secretary, who had spent the previous few months running the National Counterterrorism Center. “You are one of two people in the United States now with the capability to launch nuclear weapons.”

A Pentagon official who had worked closely with Miller had heard a rumor about him potentially replacing Esper more than a week before the election. “My first instinct was this is the most preposterous thing I’ve ever heard,” the official recalled. But then he remembered how Miller had changed in the Trump White House. “He’s inclined to be a bit of a sail, and as the wind blows he will flap in that direction,” the official said. “He’s not an ideologue. He’s just a guy willing to do their bidding.” By coincidence, the official happened to be walking into the Pentagon just as Miller was entering—a video of Miller tripping on the stairs soon made the rounds. Accompanying him were three men who would, for a few weeks, at least, have immense influence over the most powerful military in the world: Kash Patel, Miller’s new chief of staff; Ezra Cohen, who would ascend to acting Under-Secretary of Defense for Intelligence and Security; and Anthony Tata, a retired general and a talking head on Fox News, who would become the Pentagon’s acting head of policy.
It was an extraordinary trio. Tata’s claims to fame were calling Obama a “terrorist leader”—an assertion he later retracted—and alleging that a former C.I.A. director had threatened to assassinate Trump. Patel, a former aide to Devin Nunes, the top Republican on the House Intelligence Committee, had been accused of spreading conspiracy theories claiming that Ukraine, not Russia, had interfered in the 2016 election. Both Trump’s third national-security adviser, John Bolton, and Bolton’s deputy, Charles Kupperman, had vociferously objected to putting Patel on the National Security Council staff, backing down only when told that it was a personal, “must-hire” order from the President. Still, Patel found his way around them to deal with Trump directly, feeding him packets of information on Ukraine, which was outside his portfolio, according to testimony during Trump’s first impeachment. (In a statement for this article, Patel called the allegations a “total fabrication.”) Eventually, Patel was sent to help Ric Grenell carry out a White House-ordered purge of the intelligence community.
Cohen, who had worked earlier in his career at the Defense Intelligence Agency under Michael Flynn, had initially been hired at the Trump National Security Council in 2017 but was pushed out after Flynn’s swift implosion as Trump’s first national-security adviser. When efforts were later made to rehire Cohen in the White House, Bolton’s deputy vowed to “put my badge on the table” and quit. “I am not going to hire somebody that is going to be another cancer in the organization, and Ezra is cancer,” Kupperman bluntly told Trump. In the spring of 2020, Cohen landed at the Pentagon, and following Trump’s post-election shakeup he assumed the top intelligence post at the Pentagon.
Milley had firsthand reason to be wary of these new Pentagon advisers. Just before the election, he and Pompeo were infuriated when a top-secret Navy SEAL Team 6 rescue mission to free an American hostage held in Nigeria nearly had to be cancelled at the last minute. The Nigerians had not formally approved the mission in advance, as required, despite Patel’s assurances. “Planes were already in the air and we didn’t have the approvals,” a senior State Department official recalled. The rescue team was kept circling while diplomats tried to track down their Nigerian counterparts. They managed to find them only minutes before the planes would have had to turn back. As a result, the official said, both Pompeo and Milley, who believed he had been personally lied to, “assigned ill will to that whole cabal.” The C.I.A. refused to have anything to do with Patel, Pompeo recalled to his State Department staff, and they should be cautious as well. “The Secretary thought these people were just wackadoodles, nuts, and dangerous,” a second senior State Department official said. (Patel denied their accounts, asserting, “I caused no delay at all.”)

After Esper’s firing, Milley summoned Patel and Cohen separately to his office to deliver stern lectures. Whatever machinations they were up to, he told each of them, “life looks really shitty from behind bars. And, whether you want to realize it or not, there’s going to be a President at exactly 1200 hours on the twentieth and his name is Joe Biden. And, if you guys do anything that’s illegal, I don’t mind having you in prison.” Cohen denied that Milley said this to him, insisting it was a “very friendly, positive conversation.” Patel also denied it, asserting, “He worked for me, not the other way around.” But Milley told his staff that he warned both Cohen and Patel that they were being watched: “Don’t do it, don’t even try to do it. I can smell it. I can see it. And so can a lot of other people. And, by the way, the military will have no part of this shit.”

Part of the new team’s agenda soon became clear: making sure Trump fulfilled his 2016 campaign promise to withdraw American troops from the “endless wars” overseas. Two days after Esper was fired, Patel slid a piece of paper across the desk to Milley during a meeting with him and Miller. It was an order, with Trump’s trademark signature in black Sharpie, decreeing that all four thousand five hundred remaining troops in Afghanistan be withdrawn by January 15th, and that a contingent of fewer than a thousand troops on a counterterrorism mission in Somalia be pulled out by December 31st.
Milley was stunned. “Where’d you get this?” he said.
Patel said that it had just come from the White House.
“Did you advise the President to do this?” he asked Patel, who said no.
“Did you advise the President to do this?” he asked Miller, who said no.
“Well, then, who advised the President to do it?” Milley asked. “By law, I’m the President’s adviser on military action. How does this happen without me rendering my military opinion and advice?” With that, he announced that he was putting on his dress uniform and going to the White House, where Milley and the others ended up in the office of the national-security adviser, Robert O’Brien.
“Where did this come from?” Milley demanded, putting the withdrawal order on O’Brien’s desk.
“I don’t know. I’ve never seen that before,” O’Brien said. “It doesn’t look like a White House memo.”

Keith Kellogg, a retired general serving as Pence’s national-security adviser, asked to see the document. “This is not the President,” he said. “The format’s not right. This is not done right.”

“Keith, you’ve got to be kidding me,” Milley said. “You’re telling me that someone’s forging the President of the United States’ signature?”

The order, it turned out, was not fake. It was the work of a rogue operation inside Trump’s White House overseen by Johnny McEntee, Trump’s thirty-year-old personnel chief, and supported by the President himself. The order had been drafted by Douglas Macgregor, a retired colonel and a Trump favorite from his television appearances, working with a junior McEntee aide. The order was then brought to the President, bypassing the national-security apparatus and Trump’s own senior officials, to get him to sign it.
Macgregor often appeared on Fox News demanding an exit from Afghanistan and accused Trump’s advisers of blocking the President from doing what he wanted. “He needs to send everyone out of the Oval Office who keeps telling him, ‘If you do that and something bad happens, it’s going to be blamed on you, Mr. President,’ ” Macgregor had told Tucker Carlson in January. “He needs to say, ‘I don’t give a damn.’ ”

On the day that Esper was fired, McEntee had invited Macgregor to his office, offered him a job as the new acting Defense Secretary’s senior adviser, and handed him a handwritten list of four priorities that, as Axios reported, McEntee claimed had come directly from Trump:

1. Get us out of Afghanistan.
2. Get us out of Iraq and Syria.
3. Complete the withdrawal from Germany.
4. Get us out of Africa.
Once the Afghanistan order was discovered, Trump’s advisers persuaded the President to back off, reminding him that he had already approved a plan for leaving over the following few months. “Why do we need a new plan?” Pompeo asked. Trump relented, and O’Brien then told the rest of the rattled national-security leadership that the order was “null and void.” The compromise, however, was a new order that codified the drawdown to twenty-five hundred troops in Afghanistan by mid-January, which Milley and Esper had been resisting, and a reduction in the remaining three thousand troops in Iraq as well. The State Department was given one hour to notify leaders of those countries before the order was released.
Two nightmare scenarios kept running through Milley’s mind. One was that Trump might spark an external crisis, such as a war with Iran, to divert attention or to create a pretext for a power grab at home. The other was that Trump would manufacture a domestic crisis to justify ordering the military into the streets to prevent the transfer of power. Milley feared that Trump’s “Hitler-like” embrace of his own lies about the election would lead him to seek a “Reichstag moment.” In 1933, Hitler had seized on a fire in the German parliament to take control of the country. Milley now envisioned a declaration of martial law or a Presidential invocation of the Insurrection Act, with Trumpian Brown Shirts fomenting violence.
By late November, amid Trump’s escalating attacks on the election, Milley and Pompeo’s coöperation had deepened—a fact that the Secretary of State revealed to Attorney General Bill Barr over dinner on the night of December 1st. Barr had just publicly broken with Trump, telling the Associated Press in an interview that there was no evidence of election fraud sufficient to overturn the results. As they ate at an Italian restaurant in a Virginia strip mall, Barr recounted for Pompeo what he called “an eventful day.” And Pompeo told Barr about the extraordinary arrangement he had proposed to Milley to make sure that the country was in steady hands until the Inauguration: they would hold daily morning phone calls with Mark Meadows. Pompeo and Milley soon took to calling them the “land the plane” phone calls.
“Our job is to land this plane safely and to do a peaceful transfer of power the twentieth of January,” Milley told his staff. “This is our obligation to this nation.” There was a problem, however. “Both engines are out, the landing gear are stuck. We’re in an emergency situation.”

In public, Pompeo remained his staunchly pro-Trump self. The day after his secret visit to Milley’s house to commiserate about “the crazies” taking over, in fact, he refused to acknowledge Trump’s defeat, snidely telling reporters, “There will be a smooth transition—to a second Trump Administration.” Behind the scenes, however, Pompeo accepted that the election was over and made it clear that he would not help overturn the result. “He was totally against it,” a senior State Department official recalled. Pompeo cynically justified this jarring contrast between what he said in public and in private. “It was important for him to not get fired at the end, too, to be there to the bitter end,” the senior official said.
Both Milley and Pompeo were angered by the bumbling team of ideologues that Trump had sent to the Pentagon after the firing of Esper, a West Point classmate of Pompeo’s. The two, who were “already converging as fellow-travellers,” as one of the State officials put it, worked even more closely together as their alarm about Trump’s post-election conduct grew, although Milley was under no illusions about the Secretary of State. He believed that Pompeo, a longtime enabler of Trump who aspired to run for President himself, wanted “a second political life,” but that Trump’s final descent into denialism was the line that, at last, he would not cross. “At the end, he wouldn’t be a party to that craziness,” Milley told his staff. By early December, as they were holding their 8 A.M. land-the-plane calls, Milley was confident that Pompeo was genuinely trying to achieve a peaceful handover of power to Biden. But he was never sure what to make of Meadows. Was the chief of staff trying to land the plane or to hijack it?

Most days, Milley would also call the White House counsel, Pat Cipollone, who was hardly a usual interlocutor for a chairman of the Joint Chiefs. In the final weeks of the Administration, Cipollone, a true believer in Trump’s conservative agenda, was a principal actor in the near-daily drama over Trump’s various schemes to overturn his election defeat. After getting off one call with Cipollone, Milley told a visitor that the White House counsel was “constructive,” “not crazy,” and a force for “trying to keep guardrails around the President.”

Milley continued to reach out to Democrats close to Biden to assure them that he would not allow the military to be misused to keep Trump in power. One regular contact was Susan Rice, the former Obama national-security adviser, dubbed by Democrats the Rice Channel. He also spoke several times with Senator Angus King, an Independent from Maine. “My conversations with him were about the danger of some attempt to use the military to declare martial law,” King said. He took it upon himself to reassure fellow-senators. “I can’t tell you why I know this,” but the military will absolutely do the right thing, he would tell them, citing Milley’s “character and honesty.”

Milley had increasing reason to fear that such a choice might actually be forced upon him. In late November, Trump pardoned Michael Flynn, who had pleaded guilty to charges of lying to the F.B.I. about his contacts with Russia. Soon afterward, Flynn publicly suggested several extreme options for Trump: he could invoke martial law, appoint a special counsel, and authorize the military to “rerun” an election in the swing states. On December 18th, Trump hosted Flynn and a group of other election deniers in the Oval Office, where, for the first time in American history, a President would seriously entertain using the military to overturn an election. They brought with them a draft of a proposed Presidential order requiring the acting Defense Secretary—Christopher Miller—to “seize, collect, retain and analyze” voting machines and provide a final assessment of any findings in sixty days, well after the Inauguration was to take place. Later that night, Trump sent out a tweet beckoning his followers to descend on the capital to help him hold on to office. “Big protest in D.C. on January 6th,” he wrote at 1:42 a.m. “Be there, will be wild!”

Milley’s fears of a coup no longer seemed far-fetched.
While Trump was being lobbied by “the crazies” to order troops to intervene at home, Milley and his fellow-generals were concerned that he would authorize a strike against Iran. For much of his Presidency, Trump’s foreign-policy hawks had agitated for a showdown with Iran; they accelerated their efforts when they realized that Trump might lose the election. In early 2020, when Mike Pence advocated taking tough measures, Milley asked why. “Because they are evil,” Pence said. Milley recalled replying, “Mr. Vice-President, there’s a lot of evil in the world, but we don’t go to war against all of it.” Milley grew even more nervous before the election, when he heard a senior official tell Trump that if he lost he should strike Iran’s nuclear program. At the time, Milley told his staff that it was a “What the fuck are these guys talking about?” moment. Now it seemed frighteningly possible.
Robert O’Brien, the national-security adviser, had been another frequent cheerleader for tough measures: “Mr. President, we should hit ’em hard, hit ’em hard with everything we have.” Esper, in his memoir, called “hit them hard” O’Brien’s “tedious signature phrase.” (O’Brien disputed this, saying, “The quote attributed to me is not accurate.”) In the week of Esper’s firing, Milley was called to the White House to present various military options for attacking Iran and encountered a disturbing performance by Miller, the new acting Defense Secretary. Miller later told Jonathan Karl, of ABC, that he had intentionally acted like a “fucking madman” at the meeting, just three days into his tenure, pushing various escalatory scenarios for responding to Iran’s breakout nuclear capacities.
Miller’s behavior did not look intentional so much as unhelpful to Milley, as Trump kept asking for alternatives, including an attack inside Iran on its ballistic-weapons sites. Milley explained that this would be an illegal preëmptive act: “If you attack the mainland of Iran, you will be starting a war.” During another clash with Trump’s more militant advisers, when Trump was not present, Milley was even more explicit. “If we do what you’re saying,” he said, “we are all going to be tried as war criminals in The Hague.”

Trump often seemed more bluster than bite, and the Pentagon brass still believed that he did not want an all-out war, yet he continued pushing for a missile strike on Iran even after that November meeting. If Trump said it once, Milley told his staff, he said it a thousand times. “The thing he was most worried about was Iran,” a senior Biden adviser who spoke with Milley recalled. “Milley had had the experience more than once of having to walk the President off the ledge when it came to retaliating.” The biggest fear was that Iran would provoke Trump, and, using an array of diplomatic and military channels, American officials warned the Iranians not to exploit the volatile domestic situation in the U.S. “There was a distinct concern that Iran would take advantage of this to strike at us in some way,” Adam Smith, the House Armed Services chairman, recalled.
Among those pushing the President to hit Iran before Biden’s Inauguration, Milley believed, was the Israeli Prime Minister, Benjamin Netanyahu. On December 18th, the same day that Trump met with Flynn to discuss instituting martial law, Milley met with Netanyahu at his home in Jerusalem to personally urge him to back off with Trump. “If you do this, you’re gonna have a fucking war,” Milley told him.
Two days later, on December 20th, Iranian-backed militias in Iraq fired nearly two dozen rockets at the American Embassy in Baghdad. Trump responded by publicly blaming Iran and threatening major retaliation if so much as a single American was killed. It was the largest attack on the Green Zone in more than a decade, and exactly the sort of provocation Milley had been dreading.
During the holidays, tensions with Iran escalated even more as the first anniversary of the American killing of Suleimani approached. Ayatollah Ali Khamenei warned that “those who ordered the murder of General Soleimani” would “be punished.” Late on the afternoon of Sunday, January 3rd, Trump met with Milley, Miller, and his other national-security advisers on Iran. Pompeo and Milley discussed a worrisome new report from the International Atomic Energy Agency. But, by the end, even Pompeo and O’Brien, the Iran hawks, opposed a military strike at this late hour in Trump’s Presidency. “He realized the clock ran out,” Milley told his staff. Trump, consumed with his election fight, backed off.
At the end of the meeting with his security chiefs, the President pulled Miller aside and asked him if he was ready for the upcoming January 6th protest. “It’s going to be a big deal,” Milley heard Trump tell Miller. “You’ve got enough people to make sure it’s safe for my people, right?” Miller assured him he did. This was the last time that Milley would ever see Trump.
On January 6th, Milley was in his office at the Pentagon meeting with Christine Wormuth, the lead Biden transition official for the Defense Department. In the weeks since the election, Milley had started displaying four networks at once on a large monitor across from the round table where he and Wormuth sat: CNN and Fox News, as well as the small pro-Trump outlets Newsmax and One America News Network, which had been airing election disinformation that even Fox would not broadcast. “You’ve got to know what the enemy is up to,” Milley had joked when Wormuth noticed his viewing habits at one of their meetings.
Milley and Wormuth that day were supposed to discuss the Pentagon’s plans to draw down U.S. troops in Afghanistan, as well as the Biden team’s hopes to mobilize large-scale Covid vaccination sites around the country. But, as they realized in horror what was transpiring on the screen in front of them, Milley was summoned to an urgent meeting with Miller and Ryan McCarthy, the Secretary of the Army. They had not landed the plane, after all. The plane was crashing.
Milley entered the Defense Secretary’s office at 2:30 p.m., and they discussed deploying the D.C. National Guard and mobilizing National Guard units from nearby states and federal agents under the umbrella of the Justice Department. Miller issued an order at 3:04 p.m. to send in the D.C. Guard.
But it was too late to prevent the humiliation: Congress had been overwhelmed by a mob of election deniers, white-supremacist militia members, conspiracy theorists, and Trump loyalists. Milley worried that this truly was Trump’s “Reichstag moment,” the crisis that would allow the President to invoke martial law and maintain his grip on power.
From the secure facility at Fort McNair, where they had been brought by their protective details, congressional leaders called on the Pentagon to send forces to the Capitol immediately. Nancy Pelosi and Chuck Schumer were suspicious of Miller: Whose side was this unknown Trump appointee on? Milley tried to reassure the Democratic leadership that the uniformed military was on the case, and not there to do Trump’s bidding. The Guard, he told them, was coming.
It was already after three-thirty by then, however, and the congressional leaders were furious that it was taking so long. They also spoke with Mike Pence, who offered to call the Pentagon as well. He reached Miller around 4 p.m., with Milley still in his office listening in. “Clear the Capitol,” Pence ordered.

Although it was the Vice-President who was seeking to defend the Capitol, Meadows wanted to pretend that Trump was the one taking action. He called Milley, telling him, “We have to kill the narrative that the Vice-President is making all the decisions. We need to establish the narrative that the President is still in charge.” Milley later dismissed Meadows, whose spokesperson denied Milley’s account, as playing “politics, politics, politics.”

The Guard finally arrived at the Capitol by 5:40 p.m., “sprint speed” for the military, as Milley would put it, but not nearly fast enough for some members of Congress, who would spend months investigating why it took so long. By 7 p.m., a perimeter had been set up outside the Capitol, and F.B.I. and A.T.F. agents were going door to door in the Capitol’s many hideaways and narrow corridors, searching for any remaining rioters.
That night, waiting for Congress to return and formally ratify Trump’s electoral defeat, Milley called one of his contacts on the Biden team. He explained that he had spoken with Meadows and Pat Cipollone at the White House, and that he had been on the phone with Pence and the congressional leaders as well. But Milley never heard from the Commander-in-Chief, on a day when the Capitol was overrun by a hostile force for the first time since the War of 1812. Trump, he said, was both “shameful” and “complicit.”

Later, Milley would often think back to that awful day. “It was a very close-run thing,” the historically minded chairman would say, invoking the famous line of the Duke of Wellington after he had only narrowly defeated Napoleon at Waterloo. Trump and his men had failed in their execution of the plot, failed in part by failing to understand that Milley and the others had never been Trump’s generals and never would be. But their attack on the election had exposed a system with glaring weaknesses. “They shook the very Republic to the core,” Milley would eventually reflect. “Can you imagine what a group of people who are much more capable could have done?” ♦

This is drawn from “The Divider: Trump in the White House, 2017-2021.” An earlier version of this article mistakenly attributed a quote to Mark Esper’s book.
" |
294 | 2,023 | "The Atlantic Festival 2023: A Live and Virtual Event - The Atlantic" | "https://www.theatlantic.com/live/atlantic-festival-2023" | "The Atlantic Festival 2023
The Atlantic Festival brings together influential and provocative political, cultural, business, tech, and climate leaders for two full days of in-depth interviews, timely forums, intimate breakout sessions, book talks, screenings, and networking opportunities.
Our writers and moderators will host a lively exchange of complex ideas, addressing the most significant issues of our time with today’s boldest thinkers as we bring The Atlantic ’s journalism to life onstage.
In-person and virtual passes are available now.
See agenda below.
Becoming a subscriber is the best way to experience The Atlantic Festival in person. Subscribers enjoy 30 percent off in-person passes, reserved seating, and other benefits.
Passes
Festival Pass: Admission to all in-person sessions, including the Ideas Stage, screenings, book talks, happy hours, and more.
Professional Pass (Sold Out): Additional networking, Festival Lounge access, reserved seats at the Ideas Stage, merchandise, and more (limited availability).
Virtual Pass: Livestream the main-stage programming for free from anywhere in the world.
Group Sales: Bring your colleagues, family, or cohort to The Atlantic Festival.
JOIN US FOR TWO DAYS OF CAN’T-MISS EVENTS Two full days with more than 10 events each day. Seventy-plus speakers, happy hours, thousands of attendees to meet and have meaningful connections and conversations with. An experience worth investing in that you won’t want to miss. The Festival Pass has you covered — $200.
Tickets are nonrefundable, and seating is first come, first served.
Speakers
Hillary Rodham Clinton: Former Secretary of State and United States Senator
Will Hurd: Former Representative, Texas
Kerry Washington: Actor, Director, Producer, Activist, and Author
Jeffrey Goldberg: Editor in Chief, The Atlantic
Mira Murati: Chief Technology Officer, OpenAI
Jake Tapper: Author, All the Demons Are Here; Anchor and Chief Washington Correspondent, CNN
Spike Lee: Filmmaker
Antony Blinken: U.S. Secretary of State
Chimamanda Ngozi Adichie: Author
Rohit Aggarwala: Commissioner, Department of Environmental Protection and Chief Climate Officer, City of New York
Tim Alberta: Staff Writer, The Atlantic
Jacqueline Alemany: Congressional Investigations Reporter, The Washington Post
Peter Ambler: Executive Director and Co-Founder, Giffords
Donnel Baird: Founder and CEO, BlocPower
Ross Andersen: Staff Writer, The Atlantic
Anne Applebaum: Staff Writer, The Atlantic
Representative Nanette Barragán (D-CA) (Underwriter Session)
Laura Barrón-López: White House Correspondent, PBS NewsHour
Xavier Becerra: Secretary, U.S. Department of Health and Human Services
Julie Beck: Senior Editor, The Atlantic
Gal Beckerman: Senior Editor, The Atlantic
Omi Bell: Founder and CEO, Black Girl Ventures
Melissa Ben-Ishay: CEO and Co-Founder, Baked by Melissa
Richard Bonneau: Vice President, Prescient Design, Genentech (Underwriter Session)
Arthur C. Brooks: Contributing Writer, The Atlantic
Ronald Brownstein: Senior Editor, The Atlantic
Lonnie G. Bunch III: Secretary of the Smithsonian Institution
Yasmin Cader: Deputy Legal Director, ACLU, and Director, Trone Center for Justice and Equality
Gilbert Campbell: Founder and CEO, Volt Energy Utility
Representative Joaquin Castro (D-TX)
Kyunghyun Cho: Senior Director of Frontier Research, Prescient Design, Genentech (Underwriter Session)
McKay Coppins: Staff Writer, The Atlantic
Nancy Cordes: Chief White House Correspondent, CBS News
Governor Spencer Cox: Utah
Chris Cummiskey: Executive Vice President and Chief Commercial and Customer Solutions Officer, Southern Company (Underwriter Session)
R.J. Cutler: Director and Producer
Shreya Dave: Co-founder and CEO, Via Separations
Jerusalem Demsas: Staff Writer, The Atlantic
Caitlin Dickerson: Staff Writer, The Atlantic
John Dickerson: Anchor, CBS News Prime Time, and Chief Political Analyst, CBS News
Andrea Ducas: Vice President of Health Policy, Center for American Progress
Jamie Ducharme: Health Correspondent, Time
Claudine Ebeid: Executive Producer of Audio, The Atlantic
Jonathan Elkind: Senior Research Scholar, Columbia University’s Center on Global Energy Policy
Leah Ellis: CEO and Co-Founder, Sublime Systems
Angel Ellis: Director, Mvskoke Media
Franklin Foer: Staff Writer, The Atlantic
Aneesa Folds: Cast, freestyle+
Carla Frisch: Acting Executive Director, Office of Policy, U.S. Department of Energy
David Frum: Staff Writer, The Atlantic
Megan Garber: Staff Writer, The Atlantic
Gabby Giffords: Former Congresswoman and Co-Founder, Giffords
Celena Gill: CEO, Frères Branchiaux Retail Enterprises
Bria Gillum: Senior Program Officer, Criminal Justice, MacArthur Foundation (Underwriter Session)
Mary R. Grealy: President, Healthcare Leadership Council (Underwriter Session)
Lauren Groff: Author, The Vaster Wilds
Dennis Hancock: Senior Vice President and Head of Digital Health, Medicines, and AI, Pfizer (Underwriter Session)
Adam Harris: Staff Writer, The Atlantic
Hugh Herr: Professor of Media Arts and Sciences, MIT Media Lab
Jemele Hill: Contributing Writer, The Atlantic
Ivor Braden Horn: Chief Health Equity Officer, Google
Marisa Hughes: Environmental Resilience Research Lead, Johns Hopkins Applied Physics Laboratory
Raúl Ibañez: Senior Vice President, On-Field Operations, MLB
Mateo Jaramillo: Co-Founder and CEO, Form Energy (Underwriter Session)
Weijia Jiang: Senior White House Correspondent, CBS News
Alencia Johnson: Founder, 1063 West Broad, and Senior Adviser, Biden-Harris 2020 campaign
Tamara Keith: White House Correspondent, NPR
Toby Kiers: Executive Director, Society for the Protection of Underground Networks
Karlie Kloss: Supermodel, Entrepreneur, and Philanthropist (Underwriter Session)
Rebecca Landsberry-Baker: Director, Bad Press
Sarah Laskow: Senior Editor, The Atlantic
Mark Leibovich: Staff Writer, The Atlantic
Helen Lewis: Staff Writer, The Atlantic
Shirley Li: Staff Writer, The Atlantic
Sarah Longwell: Publisher, The Bulwark
Annie Lowrey: Staff Writer, The Atlantic
Mike Madsen: Strategic Adviser to the Director and former Deputy Director, Defense Innovation Unit, U.S. Department of Defense
Aamir Malik: Executive Vice President and Chief Business Innovation Officer, Pfizer (Underwriter Session)
Michael Mann: Director, Penn Center for Science, Sustainability and the Media, and Author, Our Fragile Moment
Debra Mathews: Associate Director for Research and Programs, Johns Hopkins Berman Institute of Bioethics
Alice McKown: Publisher & CRO, The Atlantic (Underwriter Session)
Karhlton F. Moore: Director, Bureau of Justice Assistance, U.S. Department of Justice
Russell Moore: Editor in Chief, Christianity Today
Peter Moskos: Professor, John Jay College of Criminal Justice, and Former Baltimore City Police Officer
Richard G. Newell: President and CEO, Resources for the Future
Vann R. Newkirk II: Senior Editor, The Atlantic
Jen O’Malley Dillon: Deputy Chief of Staff, Biden-Harris 2020 Campaign Manager, Political Strategist
Anita Otubu: Senior Director, Universal Energy Facility (SEforALL)
Christian Paige: Emmy-Nominated Spoken-Word Poet, Professional Speaker, and Educator (Underwriter Session)
Joe Peeler: Director, Bad Press
Nancy Pelosi: Speaker Emerita of the U.S. House of Representatives
Rach Pike: Co-Founder, As You Are DC
Elaina Plott Calabro: Staff Writer, The Atlantic
Asahi Pompey: Global Head of Corporate Engagement, Goldman Sachs (Underwriter Session)
Kimberly Powell: Vice President of Healthcare, NVIDIA (Underwriter Session)
Laurene Powell Jobs: Founder and President, Emerson Collective
Rena Priest: Award-Winning Poet and Sixth Washington State Poet Laureate (Underwriter Session)
Natalie Quillian: Deputy Chief of Staff, Biden-Harris 2020 Deputy Campaign Manager, Political Strategist
Anastacia-Renee: Queer Writer, Educator, Interdisciplinary Artist, Speaker, and Podcaster (Underwriter Session)
Emi Reyes: CEO, Latino Economic Development Center
Robert Rooks: CEO, REFORM Alliance
Rebecca Rosen: Senior Editor, The Atlantic
Hanna Rosin: Senior Editor, The Atlantic, and Host, Radio Atlantic
Christina Ruffini: Correspondent, CBS News
Melanie Rubin: Cast, freestyle+
Sam Schaeffer: CEO and Executive Director, Center for Employment Opportunities
Governor Josh Shapiro: Pennsylvania
Hana S. Sharif: Artistic Director, Arena Stage
Aasma Shaukat: Director, GI Outcomes Research, NYU Langone Health
Sharyn Skeeter: Editor, Poet, and Author, Dancing With Langston and What’s Next? Short Fiction in Time of Change (Underwriter Session)
Clint Smith: Staff Writer, The Atlantic
Evan Smith: Contributor, The Atlantic, and Senior Adviser, Emerson Collective
Robert Stone: CEO, City of Hope, and Helen and Morgan Chu Chief Executive Officer Distinguished Chair (Underwriter Session)
Cheryl Strayed: Author
Elliot Stultz: Senior Vice President and Deputy General Counsel, Allstate (Underwriter Session)
Mustafa Suleyman: Co-founder and CEO, Inflection AI (Underwriter Session)
Jake Sullivan: U.S. National Security Adviser
Morgan Sword: Executive Vice President, Baseball Operations, MLB
Amir Tarighat: Co-Founder and CEO, Agency
Bhumika Tharoor: Managing Editor, The Atlantic
Aswad Thomas: Vice President, Alliance for Safety and Justice, and National Director for Crime Survivors for Safety and Justice (Underwriter Session)
Nicholas Thompson: CEO, The Atlantic (Underwriter Session)
Andrea Valdez: Managing Editor, The Atlantic
Anthony Veneziale: Cast, freestyle+
Cleo Wade: Author and Poet, Remember Love
Robert Waldinger: Professor of Psychiatry, Harvard Medical School, and Author
Evelyn N. Wang: Director of ARPA-E, Department of Energy
Anne White: Executive Vice President and President of Lilly Neuroscience, Eli Lilly and Company (Underwriter Session)
Maya Wiley: President and CEO, The Leadership Conference on Civil and Human Rights
Chris Womack: President and CEO, Southern Company (Underwriter Session)
Ramunda Lark Young: Owner and Co-Founder, MahoganyBooks
Ali Zaidi: White House National Climate Adviser
Ellen Zhong: Assistant Professor of Computer Science, Princeton University (Underwriter Session)
LIVE AT THE WHARF
In partnership with Washington, D.C.’s reimagined waterfront neighborhood The Wharf—home of The Atlantic’s new D.C. headquarters—and the many restaurants and cultural and community organizations in the area, we look forward to this can’t-miss opportunity to experience The Atlantic’s journalism come to life.
See agenda below.
Pick up signed copies of books by participating authors from the official festival bookseller, Politics and Prose.
This year we are proud to partner with Martha’s Table , addressing food insecurity by helping to feed over 250 community members each day. Please join us in supporting these and other local organizations.
Agenda Sep. 28-29 Buy a pass and then register for the sessions you wish to attend. Please note that if a session is at capacity, we will have a stand-by line in person at the event.
Day One (Thu, Sep 28): Ideas Stage, featuring Nancy Pelosi, Kerry Washington, Antony Blinken, and more Virtual / In-Person Summary The Atlantic Festival, now in its 15th year, is the preeminent live exploration of The Atlantic’s journalism. Join us on the Ideas Stage to unpack the most consequential issues shaping our changing nation. Speaker Emerita Nancy Pelosi will be in conversation with The Atlantic’s editor in chief, Jeffrey Goldberg, to discuss the future of global and domestic democracy; the actor, producer, and activist Kerry Washington will join Clint Smith, an Atlantic staff writer, to discuss her deeply personal and highly anticipated new memoir, Thicker than Water; Jeffrey Goldberg will also interview Secretary of State Antony Blinken about the future of American foreign policy; and many more coming soon.
Duration 2 hours 15 minutes Location Arena Stage Speakers Kerry Washington Actor, Director, Producer, Activist, and Author Jeffrey Goldberg Editor in Chief, The Atlantic Antony Blinken U.S. Secretary of State Nancy Pelosi Speaker Emerita of the U.S. House of Representatives Clint Smith Staff Writer, The Atlantic Tracks Business, Climate, Culture, Health, Politics, Progress + Tech, Race + Identity The Big Story: Preventing Gun Violence In-Person Summary Join former Congresswoman Gabby Giffords and Peter Ambler , the executive director of Giffords: Courage to Fight Gun Violence, for a timely conversation about the urgent need to address gun violence in America.
Duration 30 minutes Location Pier Stage Speakers Peter Ambler Executive Director and Co-Founder, Giffords Gabby Giffords Former Congresswoman and Co-Founder, Giffords Tracks Culture, Politics The Big Story: A Conversation With National Climate Adviser Ali Zaidi In-Person Summary The rising temperatures, biodiversity loss, energy strains, and air, water, and plastic pollution of the climate crisis are pushing us toward a global tipping point. White House National Climate Adviser Ali Zaidi joins Atlantic managing editor Bhumika Tharoor to explore strategies that modernize the power grid and rebuild our nation’s infrastructure, and the scalable new technologies and nature-based solutions being implemented by this administration.
Duration 30 minutes Location Dockmaster Speakers Bhumika Tharoor Managing Editor, The Atlantic Ali Zaidi White House National Climate Adviser Tracks Climate Leaps by Bayer Presents: Mind to Machine* Underwritten by Leaps by Bayer In-Person Summary This session is produced by our underwriter and is independent of the Atlantic ’s editorial staff.
Today, one in five adults in the U.S. struggles with their mental health, and nearly half of the country’s high-school students experience persistent feelings of hopelessness. The advent of AI-based diagnostic tools and chatbots could result in many more people accessing crucial therapeutic services. At the same time, humans are wired for connection—centering that fact is the only way the technology will be effective when it comes to matters of the mind. In a conversation facilitated by Leaps by Bayer, Nick Thompson sits down with two prominent voices in the tech space, Karlie Kloss and Mustafa Suleyman , to explore the promises and perils of entrusting our psychological health and well-being to machines.
Duration 45 minutes Location Pearl Street Warehouse Underwriters Speakers Karlie Kloss * Supermodel, Entrepreneur, and Philanthropist Mustafa Suleyman * Co-founder and CEO, Inflection AI Nicholas Thompson * CEO, The Atlantic Tracks Health, Progress + Tech * This session is produced by our underwriter and is independent of the Atlantic’s editorial staff.
The Big Story: The Future of American Conservatism In-Person Summary The spectrum of the political right is broad—social conservatives, libertarians, MAGA supporters, populists, and RINOs—and the issues prioritized by each group vary largely. Led by senior editor Rebecca Rosen , Atlantic journalists Helen Lewis , David Frum , and Tom Nichols explore the evolution of our political parties and the future of American conservatism. What defines the Republican Party today? And what are the animating forces galvanizing conservative voters in 2024? Duration 30 minutes Location Dockmaster Speakers Anne Applebaum Staff Writer, The Atlantic David Frum Staff Writer, The Atlantic Helen Lewis Staff Writer, The Atlantic Rebecca Rosen Senior Editor, The Atlantic Tracks Politics Ideas Stage, featuring Joaquin Castro, Chimamanda Ngozi Adichie, Hakeem Jeffries, and more Virtual / In-Person Summary The Atlantic Festival, now in its 15th year, is the preeminent live exploration of The Atlantic ’s journalism. Ideas Stage interviews tackle the most consequential issues shaping our changing nation. This session includes Founder and President of Emerson Collective Laurene Powell in conversation with House Democratic Leader Hakeem Jeffries.
Representative Joaquin Castro discusses the urgent need to address border security and immigration with Pulitzer Prize–winning Atlantic staff writer Caitlin Dickerson.
Pennsylvania Governor Josh Shapiro talks about governing during this transformational time with CBS News’s John Dickerson.
Writer Chimamanda Ngozi Adichie has a powerful conversation about freedom of expression with Atlantic senior editor Gal Beckerman.
Duration 2 hours 15 minutes Location Arena Stage Speakers Chimamanda Ngozi Adichie Author Gal Beckerman Senior Editor, The Atlantic Representative Joaquin Castro (D-TX) Caitlin Dickerson Staff Writer, The Atlantic John Dickerson Anchor, CBS News Prime Time , and Chief Political Analyst, CBS News Laurene Powell Jobs Founder and President, Emerson Collective Governor Josh Shapiro Pennsylvania Tracks Business, Climate, Culture, Health, Politics, Progress + Tech, Race + Identity The Climate Summit: Building a More Sustainable Future Underwritten by Allstate In-Person Summary The stakes couldn’t be higher. Finding ways to mitigate and adapt to climate change is the opportunity and challenge of the moment.
The Atlantic will assemble policy makers, climate innovators, scientists, and business leaders to address today’s most urgent climate challenges and offer solutions for a more resilient future.
Duration 1 hour 30 minutes Location Dockmaster Underwriters Speakers Rohit Aggarwala Commissioner, Department of Environmental Protection and Chief Climate Officer, City of New York Gilbert Campbell Founder and CEO, Volt Energy Utility Nancy Cordes Chief White House Correspondent, CBS News Claudine Ebeid Executive Producer of Audio, The Atlantic Carla Frisch Acting Executive Director, Office of Policy, U.S. Department of Energy Marisa Hughes Environmental Resilience Research Lead, Johns Hopkins Applied Physics Laboratory Toby Kiers Executive Director, Society for the Protection of Underground Networks Sarah Laskow Senior Editor, The Atlantic Michael Mann Director, Penn Center for Science, Sustainability and the Media and Author, Our Fragile Moment Elliot Stultz * Senior Vice President and Deputy General Counsel, Allstate Tracks Climate, Progress + Tech * This session is produced by our underwriter and is independent of the Atlantic’s editorial staff.
Women of Washington Underwritten by Eli Lilly In-Person Summary Women in leadership roles in Washington, D.C., face unique challenges and exciting opportunities to make a difference. In this installment of Women of Washington, The Atlantic brings together women in the administration, the halls of Congress, and the media who are reshaping the D.C. landscape while making their mark on the future of our nation.
Duration 1 hour 30 minutes Location Pearl Street Warehouse Underwriters Speakers Jacqueline Alemany Congressional Investigations Reporter, The Washington Post Representative Nanette Barragán * (D-CA) Laura Barrón-López White House Correspondent, PBS NewsHour Weijia Jiang Senior White House Correspondent, CBS News Alencia Johnson Founder, 1063 West Broad and Senior Adviser, Biden-Harris 2020 campaign Tamara Keith White House Correspondent, NPR Sarah Longwell Publisher, The Bulwark Jen O’Malley Dillon Deputy Chief of Staff, Biden-Harris 2020 Campaign Manager, Political Strategist Elaina Plott Calabro Staff Writer, The Atlantic Natalie Quillian Deputy Chief of Staff, Biden-Harris 2020 Deputy Campaign Manager, Political Strategist Anne White * Executive Vice President and President of Lilly Neuroscience, Eli Lilly and Company Tracks Culture, Politics * This session is produced by our underwriter and is independent of the Atlantic’s editorial staff.
Visit Seattle Presents: Strength and Poetry—Elevating Diverse Voices* Underwritten by Visit Seattle In-Person Summary This session is produced by our underwriter and is independent of the Atlantic ’s editorial staff.
This session offers the stage to three distinguished, Seattle-based poets who share diverse perspectives in both topic and lyrical style. From anti-racism to the LGBTQ community, Indigenous experience to the beauty of the Pacific Northwest, this session will ignite conversations and offer inspiration that can continue to shape our collective culture for the better. Hosted by fellow Seattleite Sharyn Skeeter (writer, poet, novelist, and former fiction/poetry/book-review editor at Essence ), this session showcases the Emerald City as a UNESCO City of Literature and underscores its commitment to equity and inclusion.
Duration 45 minutes Location Pier Stage Underwriters Speakers Christian Paige * Emmy-Nominated Spoken-Word Poet, Professional Speaker, and Educator Rena Priest * Award-Winning Poet and Sixth Washington State Poet Laureate Anastacia-Renee * Queer Writer, Educator, Interdisciplinary Artist, Speaker, and Podcaster Sharyn Skeeter * Editor, Poet and Author, Dancing With Langston and What’s Next? Short Fiction in Time of Change Tracks Culture * This session is produced by our underwriter and is independent of the Atlantic’s editorial staff.
The Big Story: The Future of Major League Baseball In-Person Summary Join Atlantic staff writer Mark Leibovich in conversation with Major League Baseball executives Morgan Sword and Raúl Ibañez for a discussion on the league’s recent innovations and the future of America’s pastime.
Duration 30 minutes Location Dockmaster Speakers Raúl Ibañez Senior Vice President, On-Field Operations, MLB Mark Leibovich Staff Writer, The Atlantic Morgan Sword Executive Vice President, Baseball Operations, MLB Tracks Culture Happy Hour In-Person Summary Join us for an open bar, passed appetizers, and snacks. Network and unwind outdoors on the District Pier before evening programming gets under way. Open to Pass Holders only, no additional reservations are required. You must be 21+ to attend.
Duration 1 hour 30 minutes Atlantic Watch: Big Vape: The Rise and Fall of Juul In-Person Summary Netflix will debut its new documentary series Big Vape: The Rise and Fall of Juul , directed by R. J. Cutler. The series, based on the book Big Vape: The Incendiary Rise of Juul , by the Time correspondent Jamie Ducharme, chronicles Juul’s path from a scrappy tech start-up to a multibillion-dollar tobacco company whose high-nicotine-concentration and flavored products helped spark what top health authorities labeled an epidemic of youth addiction. Join us on the Pier and experience the screening of the first episode of the documentary series before its fall global premiere on Netflix. Immediately following the screening, enjoy a conversation with director R. J. Cutler and author Jamie Ducharme , led by the Atlantic staff writer Shirley Li.
Watch the Big Vape: The Rise and Fall of Juul trailer here: https://www.youtube.com/watch?v=7uhfzdsxYWU
Duration 2 hours Location Pier Stage Speakers R.J. Cutler Director and Producer Jamie Ducharme Health Correspondent, Time Shirley Li Staff Writer, The Atlantic Tracks Culture
Event Details: The Wharf in Washington, D.C., or Virtual. Maine Ave and Water St SW, Washington, DC 20024. September 28-29, 2023.
Day Two (Fri, Sep 29): The Atlantic Festival, now in its 15th year, is the preeminent live exploration of The Atlantic’s journalism. Join us on the Ideas Stage to unpack the most consequential issues shaping our changing nation.
Russell Moore , the editor in chief of Christianity Today , will be in conversation with the Atlantic staff writer Tim Alberta about the future of the American evangelical church; Mira Murati , the chief technology officer for OpenAI, will join the Atlantic staff writer Ross Andersen to discuss the future of AI; Utah Governor Spencer Cox will join the Atlantic staff writer McKay Coppins for an interview about bridging political divides; and former Secretary of State Hillary Rodham Clinton will be in conversation with The Atlantic ’s editor in chief, Jeffrey Goldberg , about existential threats to democracy.
This session will also feature two conversations produced by our underwriters Pfizer and Southern Company.
Aamir Malik , the executive vice president and chief business innovation officer of Pfizer, will highlight making the impossible possible and powering the next generation of medical breakthroughs, and Chris Womack , the president and CEO of Southern Company will speak on the power of collaboration in the energy industry and beyond.
The best-selling author Lauren Groff joins The Atlantic ’s managing editor Andrea Valdez to discuss her newest work, The Vaster Wilds.
The character-driven novel, set in 1600s America, examines religion, colonialism, and the will to survive. Signed copies of The Vaster Wilds will be available for purchase immediately following the conversation.
This session is produced by our underwriter and is independent of The Atlantic’s editorial staff. We’re in an unprecedented time in medicine, when the convergence of science and technology is transforming the discovery and development of new therapies with the potential to bring far greater benefits to patients than ever before. Computational methods, especially from AI/ML, are now becoming just as essential as biology and chemistry to the future of medicines. Hear from experts in industry and academia about how large datasets, iterative data generation capabilities, and new algorithms are being used to train and optimize predictive and generative AI models that will yield new insights for target and drug discovery and answer fundamental questions about human biology, disease, and ML itself.
Join us on the Pier as Jake Tapper , the CNN anchor and chief Washington correspondent, discusses the release of his third novel, All the Demons Are Here , with Atlantic staff writer Helen Lewis.
The thriller is set in 1977 and transports readers to the captivating worlds of mystery, murder, celebrity, and intrigue. Signed copies of All the Demons Are Here will be available for purchase immediately following the conversation.
Small businesses are the backbone of the U.S. economy, creating two-thirds of new jobs and accounting for nearly half of our economic activity. Although new business creation has seen a historic boom this decade, unfortunately, half of all small businesses will fail within their first five years. What is needed to ensure that these businesses not only survive but thrive? We’ll explore the infrastructure required to get small businesses off the ground, hear from entrepreneurs who are scaling successfully, and learn about how giving back to the community can establish roots that lead to long-term success.
Continuing our exploration of happiness and finding purpose in life, the In Pursuit of Happiness forum will help us deepen our understanding of the principles and work needed to pursue what truly brings enduring happiness. Join Atlantic contributing writer Arthur C. Brooks , the author Cheryl Strayed , the Harvard Medical School professor and author Robert Waldinger , the author and poet Cleo Wade , and freestyle+ as we explore the science of happiness, the importance of self-love and vulnerability, active mindfulness, and how to build the life you want.
After decades of calls for reform, the criminal-justice system is still plagued by rising crime rates, police violence, and an overall distrust in public safety. Here we examine some of the major issues, such as bail reform, crisis response, the probation and parole trap, and discuss practical policy solutions and action items to radically transform a system that is failing the people it is meant to serve.
American democracy is at a crossroads. The rise of authoritarianism and extremism, as well as the related crises of disinformation and misinformation, threaten democratic norms. Join us on the pier for a series of conversations to examine the way forward. Hear from former representative and 2024 Republican presidential candidate Will Hurd as he talks with Evan Smith , an Atlantic contributor and a co-founder of The Texas Tribune , about his presidential campaign, the future of the Republican Party, and the fragility of our democracy. This will be followed by an interview with National Security Adviser Jake Sullivan about the Biden administration’s strategy to address domestic and global political violence that poses existential threats to the future of democracy.
New technologies are transforming the global health-care delivery system. Advancements provide opportunities but could also pose risks to patient outcomes. What regulatory guardrails are needed for emerging technologies, such as artificial intelligence, to ensure that safety and health equity are at the center of new developments? This forum will explore the policy proposals under review, the programs that have transformed patient care, and the disruptors at the forefront of the health-care-technology revolution. Underwritten by Pfizer and City of Hope.
Urgency is mounting as the Earth warms and the United States approaches its 2050 net-zero goal. The federal government has passed bipartisan legislation to make historic investments in clean-energy infrastructure, but regulations delay the disbursement necessary to scale quickly. We will examine new technologies and convene industry leaders and policy experts to discuss the partnerships necessary to fuel innovation.
Join us on The Pier for an enthralling, nail-biter screening of the political-thriller documentary Bad Press. Out of 574 federally recognized tribes, the Muscogee Nation was one of only five to establish a free and independent press—until the tribe’s legislative branch abruptly repealed the landmark Free Press Act in advance of an election, prompting a rogue reporter to take matters into her own hands. The film is a timely and unprecedented story about the battle for freedom of the press and against state-censored media. Immediately following the screening, enjoy a conversation with the co-directors Rebecca Landsberry-Baker and Joe Peeler , and Angel Ellis , the journalist at the heart of the story, led by the Atlantic staff writer Shirley Li.
Watch the trailer for Bad Press here: https://vimeo.com/861276092/b9a9c708d1
Join Hanna Rosin , host of Radio Atlantic , for a live podcast taping with Atlantic staff writers Elaina Plott Calabro and Franklin Foer to talk about their in-depth reporting on the Biden administration and look ahead to the 2024 presidential election.
Join us for an open bar, passed appetizers, and snacks. Network and unwind outdoors on the District Pier before evening programming gets under way. Open to Pass Holders only, no additional reservations are required. You must be 21+ to attend.
Enjoy an evening on the Pier and experience a fireside chat with the Oscar-winning filmmaker Spike Lee and the Atlantic contributor Jemele Hill to close out the 15th Annual Atlantic Festival. The wide-ranging conversation will explore the personal and professional experiences that shaped Spike Lee’s prolific career, the intersection of art and activism, and much more.
"+3\"> JOIN US FOR TWO DAYS OF CAN’T-MISS EVENTS "+2\">Two full days with more than 10 events each day. Seventy-plus speakers, happy hours, thousands of attendees to meet and have meaningful connections and conversations with. An experience worth investing in that you won’t want to miss. The Festival Pass has you covered — $200.
Tickets are nonrefundable, and seating is first come, first served.
Buy a pass and then register for the sessions you wish to attend. Please note that if a session is at capacity, we will have a stand-by line in person at the event.
In partnership with Washington, D.C.’s reimagined waterfront neighborhood The Wharf—home of The Atlantic ’s new D.C. headquarters—and the many restaurants and cultural and community organizations in the area, we look forward to this can’t-miss opportunity to experience The Atlantic ’s journalism come to life.
"https://www.theatlantic.com/live/atlantic-festival-2023/#agenda\">See agenda below.
Pick up signed copies of books by participating authors from the official festival bookseller, "https://www.politics-prose.com/\">Politics and Prose.
This year we are proud to partner with "https://marthastable.org/\">Martha’s Table , addressing food insecurity by helping to feed over 250 community members each day. Please join us in supporting these and other local organizations.
" |
295 | 2,019 | "The 1p36 Genetic Disorder That's Reshaping My Family - The Atlantic" | "https://www.theatlantic.com/family/archive/2019/04/1p36-genetic-disorder-reshaping-my-family/586717" | "
Grieving the Future I Imagined for My Daughter
I now had two children, but was only just beginning to understand what it means to be a parent.
Just after midnight, I felt the first unmistakable contraction. I still had two days until my due date, but I knew it was time to get to the hospital. A bulldozer inside my uterus revved its engine, shifted into high gear, and rammed a baby out into the world less than two hours later. Her name would be Isobel, Izzy for short.
She weighed five pounds, three ounces, below the threshold for “normal.” This was surprising—I’d had an uneventful pregnancy, and in one of my last prenatal checkups, my obstetrician predicted that she’d weigh about seven pounds.
Did the doctor miscalculate my due date? I wondered.
Should I have taken more prenatal vitamins? Eaten better, worked less? There would be no explanation, at least not then. We moved upstairs into a recovery room with a view of the summer sun rising behind the Oakland, California, hills. In those early-morning hours, I cradled Izzy’s warm, powdery body and nestled into a feeling that everything was fine.
Five weeks later my father, a retired pediatrician, put a stethoscope to Izzy’s chest and heard a hissing noise. An echocardiogram two days after that revealed a small hole in the membrane dividing the lower chambers of her heart, causing oxygenated blood to leak back into her lungs. The cardiologist explained that her heart was working harder than it needed to, burning extra calories and keeping her small.
Odds were that over the next few months, new tissue would grow and the hole would “spontaneously” close. Considering how much of human development happens on its own, for a heart to correct itself in this way seemed perfectly plausible. I told myself that’s what would happen. At Christmas and New Year’s Eve gatherings with family and friends, that’s what I told them, too.
But my hope was no match for the eventual and unanimous recommendation from a panel of two dozen cardiologists: open-heart surgery, and soon. A force I could not see was starting to take over.
As Izzy’s surgery date neared, I could feel the panic slowly and steadily growing inside me. I retreated into what could be known: A cardiopulmonary-bypass machine would bring her body to a sub-hypothermic temperature, allowing the heart to stop beating. The surgeon would saw through the sternum, shave a tiny piece of tissue off the heart’s outer membrane, and use it to patch the hole. A resident would sew her back up.
Two conversations helped convince me that after the surgery, Izzy would grow up healthy and things for our family would return to normal. The first was with a couple whose son had the same procedure with the same surgeon. They apologized for having to mute the phone for short stretches to temper their 5-year-old’s rambunctiousness, something I found reassuring.
The second was an email exchange with a woman who underwent a valve replacement in the 1970s, when open-heart surgery on babies was still relatively uncommon. “I was a three-season athlete in high school,” she wrote, “and did all the partying that everyone else did. The only impact on me was a scar that healed well and frankly, made me feel like a bit of a badass.” Less than 24 hours after doctors had wheeled Izzy into the operating room for surgery, she was guzzling down bottles of high-calorie formula. In 72 hours, her rosiness returned; eight days later, we left the hospital and arrived home to find the first buds on our magnolia tree. Within a few weeks, Izzy had gained enough weight to make her growth-chart debut at the 0.2 percentile. Witnessing her scar heal was like watching a time-lapse movie, only in real time.
I started the process of reeling our ship back to shore—we’d be there soon, I thought. My parents booked their flight back to the East Coast, and my husband started a new job earlier than planned. Disillusioned by my last tech job, I was determined to make a fresh start somewhere else. I could envision the end of Izzy’s recovery period, the loving nanny I’d finally hire, a more deliberate career.
But, no. Just as we’d caught sight of land, we were again suddenly unmoored, pushed by unforgiving hands back out into the dark, open sea.
The cardiologist called on an unseasonably warm afternoon, a Tuesday last April.
Sure, I have a few minutes.
I glanced at Izzy, eight months old, wearing only a diaper. The edges of the five-inch incision line down the middle of her chest were still red and puckered from the suture removal a few days earlier. Her scar served as a visual cue that, surely, the worst was behind us.
The call itself was not a shock. One week before surgery, a neurologist had examined Izzy and noticed abnormalities in her facial features so subtle that I, her mother, could barely see them myself—slightly wide-set eyes, straight eyebrows, a thin upper lip, a tiny hole on the upper ridge of her ear that I’d mistaken for a mole. Genetic testing would be the sensible next step, the neurologist had said. He’d ordered seven vials of Izzy’s blood to be drawn in the OR.
The cardiologist began with a “Well …” followed by a sigh. Then his voice assumed the objectivity of a radio traffic reporter describing a seven-car wreck, and he rattled off the details he knew.
I absorbed only the keywords—“abnormal result … syndrome … genetic material missing …”—and scribbled “1p36” on the back of a stray Home Depot receipt. Anxious for more information, I ended the call and grabbed my laptop.
I steadied my fingers and clicked through to an online forum where parents had celebrated their child’s first step at 3, 4, or 8 years old. They compared devices to help nonverbal children communicate and shared work-arounds to Keppra, an anti-seizure medication that can cause kids to bite themselves or hallucinate.
As I skimmed their posts, my heart pounded and I started to hyperventilate. Air was stuck in my throat; I screamed to let it out, gripping the edge of the kitchen counter so I could scream louder. I felt as if I was suffocating in a room filled with invisible pillows, and the only thing that could cut through it was noise in the form of very loud, guttural, incomprehensible screaming. I slammed a door leading into the bedroom and pounded the walls. I remember thinking, I don’t give a shit if the neighbors hear.
The internet confirmed a truth that up until that moment lay beyond the boundaries of what I’d ever imagined possible for my child’s life or my own. As a mental warm-up before her birth, I’d imagined Izzy in painful situations that were both better (a broken arm, pneumonia, being bullied) and far worse (my death, or hers). I hadn’t imagined a scenario in which she might not walk or talk, or where she’d live with debilitating seizures. I hadn’t imagined that I might be uncertain whether she recognizes me. I hadn’t imagined caring for her for the rest of my life. I now had two children, but was only just beginning to understand what it means to be a parent.
The next day, my husband left early for his third day of work at his new job. In an orientation session about employee volunteering, while the presenter rolled a video about the Make-A-Wish Foundation, he sat in the back row and wept. Meanwhile, after a long, sleepless night, my son watched cartoons as I crawled through Izzy’s morning routine, taking breaks to ice my swollen eyelids. I finally got everyone dressed and dropped him off at preschool a few hours late without the words to explain why.
The day after that, Izzy and I had a geneticist appointment at the medical campus five blocks away. I’d been here before. Almost one year earlier, in my second trimester, I’d sat through the routine prenatal screening for birth defects and Down syndrome. The results had been normal.
The geneticist came in to greet me and Izzy. As I took in her easy, welcoming smile, a wave of relief washed over me.
The test was wrong, and this is all a terrible mistake! This was a delusion. She led us into an examination room, where we were joined by a younger, more clinical assistant. I called my husband and put him on speakerphone—we’d agreed before the appointment that he didn’t need to be there in person, a sign that at some level we had not yet fully grasped the magnitude of Izzy’s diagnosis.
The geneticist told us that my daughter has “the most common of rare syndromes diagnosed after birth.” Her tone remained gentle, but unequivocal.
“The size of her genetic deletion is clinically significant.” Go on.
“It’s hard to say what that means in terms of how the syndrome will present.” I recounted some of what the internet had told me.
Will she walk? Talk? Hear? Seize? See?
“We just have to wait and see.”
We reviewed three single-spaced pages of test results that looked as though they had come out of a dot-matrix printer. The geneticist was quick to clarify that “terminal deletion” referred to the physical location of Izzy’s 133 missing genes (that is, the terminus of the “p” arm of chromosome 1) and did not suggest that the syndrome itself leads to death, although its complications sometimes can. A second, more user-friendly handout summarized the syndrome’s most common “features” in a tidy, bullet-pointed list: seizures, deafness, blindness, low muscle tone, feeding issues, digestive disorders, heart disease, heart defects, kidney disease, intellectual disability, and behavior problems.
I fixated on the likelihood that Izzy would be nonverbal, feeling gutted by the possibility that she might not talk or even develop the coordination to sign. How would she express herself? How would I know her? My husband left the appointment by hanging up. The geneticist briefly examined Izzy’s “curly” toes, noting the condition as a common and typically benign congenital anomaly—connected to her syndrome, perhaps, but no one could know for sure.
I packed up our things and made our way home. The only certainty I left with was that I had a lot more to worry about than a couple of curly toes.
Books, the internet, and friends said I would go through a grieving period. But I am still not entirely sure what I am grieving.
I didn’t lose a child; now a year post-op, Izzy is here and very much alive. She shakes her head vigorously when she’s happy, and grunts indignantly when she’s not. She has gobs of voluminous hair that looks as if it’s been blown out at a salon—a common trait for “1pers,” who bear a strong physical resemblance to one another; many don’t look like their parents. But unlike most “typical” 21-month-old toddlers, she cannot yet sit up by herself (let alone toddle), grab a spoon, or use any words to communicate. A few weeks ago, she started to regularly say “aaaah,” one of the vowel sounds that are the first forms of speech—a milestone that most babies hit at four to five months old.
I spent the months following Izzy’s diagnosis deeply confused about how I should feel. Her heart defect had been an isolated biological issue, and the surgery was a relatively common procedure. The hole is gone. A genetic syndrome is different—uncontained and unfixable. Every cell in Izzy’s body lacks some data, and there’s no way those data can be recovered.
During sleepless nights, I anchored my grief in the heft of Far From the Tree, Andrew Solomon’s profound, 1,000-page book about the challenges parents face in accepting differences in their children. “We depend on the guarantee in our children’s faces that we will not die,” Solomon writes. “Children whose defining quality annihilates that fantasy of immortality are a particular insult; we must love them for themselves, and not for the best of ourselves in them, and that is a great deal harder to do.” The book offered me a crucial mooring. Powerless to change my circumstances, I could at least change my psychology.
I am learning that grief can be complicated and ambiguous—that we hold ideas and expectations of ourselves and loved ones so tightly that we have difficulty seeing them from any distance, and that it’s even harder to let them go.
I can describe what’s gone. I’ve lost the buoyancy I gained from the conversation with the parents of the rambunctious 5-year-old boy. I no longer feel the relief, even joy, of envisioning Izzy as an athletic, partying, badass teenager.
I lost any lightheartedness I had left as the 40-year-old mother of two young children. I lost my faith in statistics. A 99.98 percent chance of something not happening is also a .02 percent chance that it will.
I lost the ability to enjoy the scene of my two kids together without feeling guilty that I’d sold my son short. Instead, it’s a reminder of the responsibility I feel to gently acculturate him to the strange, politicized world of disability rights and rare diseases, and to breed empathy and a respect of difference in him above all else.
I lost the identity, earnings, and lifestyle that came with having an upward career trajectory and being an equal breadwinner to my husband. We now have the sort of traditional arrangement I never thought I’d be in: He makes all the money, and I do most of the emotional, logistical, and physical labor of child-rearing. For Izzy, this includes frequent doctor appointments, three therapy sessions a week, and a lot of open-ended research and worrying.
This laundry list of dreams lost has positive value, Solomon maintains. “While optimism can propel day-to-day life forward, realism allows parents to regain a feeling of control over what is happening and to come to see their trauma as smaller than it first seemed.” Without crumbly, unreliable hope, what else is there? There’s my child, no less alive or human than any other, and with abilities and inabilities much different than I imagined. And realism, which I’ll use to reassemble a positive, long-term picture of what her life could be. Izzy’s diagnosis wiped my canvas clean. But while the expanse of whiteness is unsettling, it is also temporary. Soon there will be lines, contours, shading—a new and beautiful composition. I will not accept less.
" |
296 | 2,021 | "Review: ‘Klara and the Sun,’ by Kazuo Ishiguro - The Atlantic" | "https://www.theatlantic.com/magazine/archive/2021/04/kazuo-ishiguro-klara-and-the-sun/618083" | "Site Navigation The Atlantic Popular Latest Newsletters Sections Politics Ideas Fiction Technology Science Photo Business Culture Planet Global Books Podcasts Health Education Projects Features Family Events Washington Week Progress Newsletters Explore The Atlantic Archive Play The Atlantic crossword The Print Edition Latest Issue Past Issues Give a Gift Search The Atlantic Quick Links Dear Therapist Crossword Puzzle Magazine Archive Your Subscription Popular Latest Newsletters Sign In Subscribe Explore Private schools and inequity, fixing the internet, America’s reliance on special ops, and understanding long COVID. Plus new fiction by Paul Yoon, pandemic merch, Beirut after the blast, Kazuo Ishiguro’s radiant robot, Sam Sifton’s no-recipe recipes, and more.
Private Schools Have Become Truly Obscene Caitlin Flanagan Unlocking the Mysteries of Long COVID Meghan O’Rourke How to Put Out Democracy’s Dumpster Fire Anne Applebaum and Peter Pomerantsev American Special Ops Forces Are Everywhere Mark Bowden Person of Korea Paul Yoon America Without God Shadi Hamid A gift that gets them talking.
Give a year of stories to spark conversation, Plus a free tote.
The Radiant Inner Life of a Robot Kazuo Ishiguro returns to masters and servants with a story of love between a machine and the girl she belongs to.
This article was published online on March 2, 2021.
Girl AF Klara, an Artificial Friend sold as a children’s companion, lives in a store. On lucky days, Klara gets to spend time in the store window, where she can see and be seen and soak up the solar energy on which she runs. Not needing human food, Klara hungers and thirsts for the Sun (she capitalizes it) and what he (she also personifies it) allows her to see. She tracks his passage along the floorboards and the buildings across the street and drinks in the scenes he illuminates. Klara registers details that most people miss and interprets them with an accuracy astonishing for an android out of the box. A passing Boy AF lags a few steps behind his child, and his weary gait makes her wonder what it would be like “to know that your child didn’t want you.” She keeps watch over a beggar and his dog, who lie so still in a doorway that they look like garbage bags. They must have died, she thinks. “I felt sadness then,” she says, “despite it being a good thing that they’d died together, holding each other and trying to help one another.”
Klara is the narrator and hero of Klara and the Sun, Kazuo Ishiguro’s eighth novel. Ishiguro is known for skipping from one genre to the next, although he subordinates whatever genre he chooses to his own concerns and gives his narrators character-appropriate versions of his singular, lightly formal diction. I guess you could call this novel science fiction. It certainly makes a contribution to the centuries-old disputation over whether machines have the potential to feel. This debate has picked up speed as the artificially intelligent agents built by actual engineers close in on the ones made up by writers and TV, film, and theater directors, the latest round in the game of tag between science and science fiction that has been going on at least since Frankenstein.
Klara is Alexa, super-enhanced. She’s the product that roboticists in a field called affective computing (also known as artificial emotional intelligence) have spent the past two decades trying to invent. Engineers have written software that can detect fine shades of feeling in human voices and faces, but so far they have failed to contrive machines that can simulate emotions convincingly.
What makes Klara an imaginary entity, at least until reality catches up with her, is that her feelings are not simulated. They’re real. We know this because she experiences pathos, a quality still seemingly impervious to computational analysis—although as a naive young robot, she does have to break it down before she can understand it. A disheveled old man stands on the far side of the street, waving and calling to an old woman on the near side. The woman goes stock-still, then crosses tentatively to him, and they cling to each other. Klara can tell that the man’s tightly shut eyes convey contradictory emotions. “They seem so happy,” she says to the store manager, or as Klara fondly calls this kindly woman, Manager. “But it’s strange because they also seem upset.” “Oh, Klara,” Manager says. “You never miss a thing, do you?” Perhaps the man and woman hadn’t seen each other in a long time, she says. “Do you mean, Manager, that they lost each other?” Klara asks. Girl AF Rosa, Klara’s best friend, is bewildered. What are they talking about? But Klara considers it her duty to empathize. If she doesn’t, she thinks, “I’d never be able to help my child as well as I should.” And so she gives herself the task of imagining loss. If she lost and then found Rosa, would she feel the same joy mixed with pain? She would and she will, and not just with respect to Rosa.
The nonhuman Klara is more human than most humans. She has, you might say, a superhuman humanity. She’s also Ishiguro’s most luminous character, literally a creature of light, dependent on the Sun. Her very name means “brightness.” But mainly, Klara is incandescently good. She’s like the kind, wise beasts endowed with speech at the dawn of creation in C. S. Lewis’s Narnia. Or, with her capacity for selfless love, like a character in a Hans Christian Andersen story.
To be clear, Klara is no shrinking mermaid. Her voice is very much her own. It may strike the ear as childlike, but she speaks in prose poetry. As the Sun goes on his “journey,” the sky assumes the hues of the mood in the house that Klara winds up in. It’s “the color of the lemons in the fruit bowl,” or “the gray of the slate chopping boards,” or the mottled shades of vomit or diarrhea or streaks of blood. The Sun peers through the floor-to-ceiling windows in a living room and pours his nourishment on the children sprawled there. When he sinks behind a barn, Klara asks if that’s where the stairs to the underworld are. Klara has gaps in her vocabulary, so she invents names and adjectives that speak unwitting truths. Outfits aren’t stylish; they’re “high-ranking.” Humans stare into “oblongs,” an aptly leaden term for our stupefying devices. Klara’s descriptive passages have a strange and lovely geometry. Her visual system processes stimuli by “partitioning” them, that is, mapping them onto a two-dimensional grid before resolving them into objects in three-dimensional space. At moments of high emotion, her partitioning becomes disjointed and expressive, a robot cubism.
In keeping with the novel’s fairy-tale logic, a girl named Josie stops in front of the window, and where other children see a fancy toy, she recognizes a kindred spirit. She begs her mother to buy Klara, but her mother resists. Klara is a B2 model, fast growing obsolete. A shipment of B3s has already arrived at the store. B2s are known for empathy, Manager says. Still, wouldn’t Josie prefer the latest model? the mother asks. The answer is no, and Klara happily joins the family.
Klara’s sojourn in Josie’s home gives the novel room to explore Ishiguro’s abiding preoccupations. One of these is service—what it does to the souls of those who give it and those who receive it, how power deforms and powerlessness cripples. In The Remains of the Day, for instance, Stevens, a butler in one of England’s great houses, worships his former master in the face of damning truths about the man’s character. Stevens grows so adept at quashing doubts about the value of a life spent in his master’s employ that he seems too numb to recognize love when it is offered to him, or to realize that he loves in return.
An adjacent leitmotif in Ishiguro’s fiction subjects the parent-child relationship to scrutiny. What are children for? Do their begetters care for them, or expect to be cared for by them, or both at once? The answers are clear in Never Let Me Go, a novel about clones given a quasi-normal childhood in a shabby-genteel boarding school cum gulag, then killed for their organs.
Klara and the Sun resists conclusions. Parents are at once domineering and dependent. They want to believe they are devoted, but wind up monstrous instead. Children are grateful and forgiving, even though they know, perhaps without knowing that they know, that they’re on their own. Josie is lucky to have Klara, who acts like a parent as well as a beloved friend. But who will take care of Klara when and if she’s no longer needed?
Ishiguro’s theme of themes, however, is love. The redemptive power of true love comes under direct discussion here and in Never Let Me Go, but crops up in his other novels too. Does such love exist? Can it really save us?
Critics often note Ishiguro’s use of dramatic irony, which allows readers to know more than his characters do. And it can seem as if his narrators fail to grasp the enormity of the injustices whose details they so meticulously describe. But I don’t believe that his characters suffer from limited consciousness. I think they have dignity. Confronted by a complete indifference to their humanity, they choose stoicism over complaint. We think we grieve for them more than they grieve for themselves, but more heartbreaking is the possibility that they’re not sure we differ enough from their overlords to understand their true sorrow. And maybe we don’t, and maybe we can’t. Maybe that’s the real irony, the way Ishiguro sticks in the shiv.
Girl AF Klara is both the embodiment of the dehumanized server and its refutation. On the one hand, she’s a thing, an appliance. “Are you a guest at all? Or do I treat you like a vacuum cleaner?” asks a woman whose home she enters. On the other hand, Klara overlooks nothing, feels everything, and, like her predecessors among Ishiguro’s protagonists, leaves us to guess at the breadth of her understanding. Her thoughts are both transparent and opaque. She either withholds or is simply not engineered to pass judgment on humans. After all, she is categorically other. Her personality is algorithmic, not neurological.
She does perceive that something bad is happening to Josie. The girl is wasting away. It turns out that she is suffering from the side effects of being “lifted,” a Panglossian term for genetic editing, done to boost intelligence, or at least academic performance. Among the many pleasures of Klara and the Sun is the savagery of its satire of the modern meritocracy. Inside Josie’s bubble of privilege, being lifted is the norm. Parents who can afford to do it do, because unlifted children have a less than 2 percent chance of getting into a decent university. The lifted study at home. Old-fashioned schools aren’t advanced enough; at 13, Josie does mathematical physics and other college-level subjects with a rotating cast of “oblong tutors.” Josie’s neighbor and best friend, Rick, who has shown signs of genius in his home engineering experiments, has not been lifted, which means he will not be encouraged to cultivate his talent and is already a pariah. At one point, Josie persuades Rick to accompany her to the “interaction meeting” that homeschooled children are required to attend to develop their social skills, of which they have few. Unsurprisingly, the augmented children bully the non-augmented one. Meanwhile, out in the hall, their mothers discuss the servant problem (“The best housekeepers still come from Europe”) and cluck about Rick’s parents. Why didn’t they do it? Did they lose their nerve? Josie’s and Rick’s parents leave Klara to perform the emotional labor they aren’t up to. Rick’s mother suffers from a mysterious condition, possibly alcoholism, that requires him to take care of her. Josie’s father is not around. He and her mother have divorced; he has been “substituted”—another euphemism, meaning “lost his job”—and has abandoned the upper-middle class to join what sounds like an anarchist community. Josie’s mother pursues her career and devotes her remaining energy to a blinding self-pity. She feels guilt about what she’s done to Josie and resents having to feel it; she’s already working on a scheme that will lessen her grief should Josie die. (This involves a more malign form of robotics.) We can tell that she makes Klara uncomfortable, because every time Klara senses that things are not as they should be, she starts partitioning like mad. At one point, Klara and “the Mother,” as Klara calls her—the definite article keeps the woman at arm’s length—undertake an expedition to a waterfall, leaving Josie behind because she’s too weak to go. Being alone with the Mother is disconcerting enough, but when they arrive at their destination, the Mother leans in close to make a disturbing request. Suddenly her face breaks into eight large boxes, while the waterfall recedes into a grid at the edge of Klara’s vision. Each box of eyes expresses a different emotion. “In one, for instance, her eyes were laughing cruelly, but in the next they were filled with sadness,” Klara reports.
Klara’s optical responses to right and wrong are the affective computer’s version of an innate morality—her unnatural natural law. They’re also another way that Ishiguro turns robot stereotypes on their head. Many hands have been wrung (including mine) about nanny bots and animatronic pets or pals, which will be, or so we prognosticators have fretted, soulless and servile. They’ll spoil the children. But Klara does nothing of the sort. She’ll carry out orders if they’re reasonable and issued politely, but she does not respond to rude commands, and she is anything but spineless. No one instructs her to try to find a cure for Josie; she does that on her own. Everyone except Klara and Rick seems resigned to the girl’s decline. The problem is that the plan of action Klara comes up with is so bizarre that the reader may suspect her software is glitching.
Oddly enough, given its subject matter, Klara and the Sun doesn’t induce the shuddery, uncanny-valley sensation that makes Never Let Me Go such a satisfying horror story. For one thing, although Klara never describes her own appearance, we deduce from the fact that humans immediately know she’s an AF that she isn’t humanoid enough to be creepy. (Clones, by contrast, pass for human, because they are human.) Moreover, this novel’s alternate universe isn’t all that alternate. Yes, lifting has made the body more cyborgian while androids have become more anthropoid, but we’ve been experiencing that role reversal for some time now. Otherwise, the setting parallels our own: It has the same extreme inequalities of wealth and opportunity, the same despoiled environment, the same deteriorating urban space. Even the sacrifice of children to parental fears about loss of status seems sadly familiar.
And Klara and the Sun doesn’t strive for uncanniness. It aspires to enchantment, or to put it another way, reenchantment, the restoration of magic to a disenchanted world. Ishiguro drapes realism like a thin cloth over a primordial cosmos. Every so often, the cloth slips, revealing the old gods, the terrible beasts, the warring forces of light and darkness. The custom of performing possibly lethal prosthetic procedures on one’s own offspring bears a family resemblance to immolating them on behalf of the god Moloch.
We can perceive monstrosity (or fail to perceive it), but Klara can see monsters. Crossing a field on the way to the waterfall with the Mother, Klara spots a bull, and grows so alarmed that she cries out. Not that she hadn’t seen photos of bulls before, but this creature Klara is allowed to stand in the pattern of the Sun. Ishiguro has anointed her, a high-tech consumer product, the improbable priestess of something very like an ancient nature cult. Gifted with a rare capacity for reverence, she tries always to remember to thank the Sun for sustaining her. Her faith in him is total. When Klara needs help, she goes to the barn where she believes he sets, and there she has the AI equivalent of visions. Old images of the store jostle against the barn’s interior walls. So do new ones: Rosa lies on the ground in distress. Klara fears that her petition may have angered the Sun, but then the glow of the sunset takes on “an almost gentle aspect.” A piece of furniture from the store, the Glass Display Trolley, rises before her, as if assumed into the sky. The robot has spoken with her god, and he has answered: “I could tell that the Sun was smiling towards me kindly as he went down for his rest.”
All fiction is an exercise in world-building, but science fiction lays new foundations, and that means shattering the old ones. It partakes of creation, but also of destruction. Klara trails a radiance that calls to mind the radiance also shed by Victor Frankenstein’s creature. He is another intelligent newborn in awe of God’s resplendence, until a vengeful rage at his abusive creator overcomes him. In Klara and the Sun, Ishiguro leaves us suspended over a rift in the presumptive order of things. Whose consciousness is limited, ours or a machine’s? Whose love is more true? If we ever do give robots the power to feel the beauty and anguish of the world we bring them into, will they murder us for it or lead us toward the light?
" |
297 | 2,023 | "Tech leaders agree on AI regulation but divided on how in Washington forum | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/sep/13/tech-leaders-washington-ai-saferty-forum-elon-musk-zuckerberg-pichai" | "Bill Gates, Sundar Pichai, Sam Altman and others gathered for ‘one of the most important conversations of the year’ US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness Tech leaders attend the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill.
Photograph: Jim Lo Scalzo/EPA Tech leaders attend the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill.
Photograph: Jim Lo Scalzo/EPA Artificial intelligence (AI) Tech leaders agree on AI regulation but divided on how in Washington forum Bill Gates, Sundar Pichai, Sam Altman and others gathered for ‘one of the most important conversations of the year’ in New York Wed 13 Sep 2023 20.27 EDT A delegation of top tech leaders including Sundar Pichai, Elon Musk, Mark Zuckerberg and Sam Altman convened in Washington on Wednesday for a closed-door meeting with US senators to discuss the rise of artificial intelligence and how it should be regulated.
The discussion, billed as an “AI safety forum”, is one of several meetings between Silicon Valley, researchers, labor leaders and government, and is taking on fresh urgency with the US elections looming and the rapid pace of AI advancement already affecting people’s lives and work.
The Democratic senator Chuck Schumer, who called the meeting “historic”, said that attendees loosely endorsed the idea of regulations but that there was little consensus on what such rules would look like.
Schumer said he asked everyone in the room – including more than 60 senators, almost two dozen tech executives, advocates and skeptics – whether government should have a role in the oversight of artificial intelligence, and that “every single person raised their hands, even though they had diverse views”.
Among the ideas discussed were whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent, and how the US could stay ahead of China and other countries.
“The key point was really that it’s important for us to have a referee,” said Elon Musk, the CEO of Tesla and X, the social network formerly known as Twitter, during a break in the forum. “It was a very civilized discussion, actually, among some of the smartest people in the world.”
Congress should do what it can to maximize the benefits and minimize the negatives of AI, Schumer told reporters, “whether that’s enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails”.
Attendees also discussed the pressing need for steps to protect the 2024 US elections from disinformation becoming supercharged by AI, Schumer said.
“The issue of actually having deepfakes where people really believe that somebody, that a campaign was saying something when they were the total creation of AI” was a key concern, said Schumer, who added that “watermarking” – badging content as AI-generated – was discussed as a solution.
The US Senate majority leader, Chuck Schumer, speaks to members of the press.
Several AI experts and other industry leaders also attended, including Bill Gates; the Motion Picture Association CEO, Charles Rivkin; the former Google CEO Eric Schmidt; the Center for Humane Technology co-founder Tristan Harris; and Deborah Raji, a researcher at University of California, Berkeley.
Some labor and civil liberties groups were also represented among the 22 attendees including Elizabeth Shuler, the president of the labor union federation AFL-CIO; Randi Weingarten, the president of the American Federation of Teachers; Janet Murguía, the president of UnidosUS; and Maya Wiley, the president and CEO of the Leadership Conference on Civil & Human Rights.
Since the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.
In his opening remarks, which Meta shared with the Guardian, Mark Zuckerberg said the company was working with academics, policy makers and civil society to “minimize the risk” of the technology while ensuring they don’t undervalue the benefits. He specifically cited work on how to watermark AI content to avoid risks such as mass spread of disinformation.
Before the forum, representatives for the Alphabet Workers Union said that Schuler, the president of AFL-CIO, would raise worker issues including those of AI raters – human moderators who are tasked with training, testing and evaluating results from Google Search and the company’s AI chatbot – who say they have struggled with low wages and minimum benefits.
“There are many conversations still to come and, throughout the process, the interests of working people must be Congress’s North Star,” Schuler said in a statement. “Workers are not the victims of technological change – we’re the solution.” Meredith Stiehm, the president of Writers Guild of America (WGA), and Randi Weingarten, the president of American Federation of Teachers.
While Schumer described the meeting as “diverse”, the sessions faced criticism for leaning heavily on the opinions of people who stand to benefit from the rapid advancements in generative AI technology. “Half of the people in the room represent industries that will profit off lax AI regulations,” said Caitlin Seeley George, a campaigns and managing director at Fight for the Future, a digital rights group.
“People who are actually impacted by AI must have a seat at this table, including the vulnerable groups already being harmed by discriminatory use of AI right now,” George said. “Tech companies have been running the AI game long enough and we know where that takes us – biased algorithms that discriminate against Black and brown folks, immigrants, people with disabilities and other marginalized groups in banking, the job market, surveillance and policing.” Some senators were critical of the private meeting, arguing that tech executives should testify in public. The Republican senator Josh Hawley said he would not attend what he said was a “giant cocktail party for big tech”.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.
Agencies contributed reporting
" |
298 | 2,023 | "New York Times, CNN and Australia’s ABC block OpenAI’s GPTBot web crawler from accessing content | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/aug/25/new-york-times-cnn-and-abc-block-openais-gptbot-web-crawler-from-scraping-content" | "Chicago Tribune and Australian newspapers the Canberra Times and Newcastle Herald also appear to have disallowed web crawler from maker of Chat GPT US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness OpenAI’s web crawler – known as GPTBot – has been blocked by the New York Times, CNN, Reuters and the Australian Broadcasting Corporation.
Photograph: Jonathan Raa/NurPhoto/Shutterstock OpenAI’s web crawler – known as GPTBot – has been blocked by the New York Times, CNN, Reuters and the Australian Broadcasting Corporation.
Photograph: Jonathan Raa/NurPhoto/Shutterstock Artificial intelligence (AI) New York Times, CNN and Australia’s ABC block OpenAI’s GPTBot web crawler from accessing content Chicago Tribune and Australian newspapers the Canberra Times and Newcastle Herald also appear to have disallowed web crawler from maker of Chat GPT Thu 24 Aug 2023 20.31 EDT News outlets including the New York Times, CNN, Reuters and the Australian Broadcasting Corporation (ABC) have blocked a tool from OpenAI, limiting the company’s ability to continue accessing their content.
OpenAI is behind one of the best known artificial intelligence chatbots, ChatGPT. Its web crawler – known as GPTBot – may scan webpages to help improve its AI models.
The Verge was first to report the New York Times had blocked GPTBot on its website. The Guardian subsequently found that other major news websites, including CNN, Reuters, the Chicago Tribune, the ABC and Australian Community Media (ACM) brands such as the Canberra Times and the Newcastle Herald, appear to have also disallowed the web crawler.
So-called large language models such as ChatGPT require vast amounts of information to train their systems and allow them to answer queries from users in ways that resemble human language patterns. But the companies behind them are often tightlipped about the presence of copyrighted material in their datasets.
The block on GPTBot can be seen in the publishers’ robots.txt files, which tell crawlers from search engines and other entities what pages they are allowed to visit.
“Allowing GPTBot to access your site can help AI models become more accurate and improve their general capabilities and safety,” OpenAI said in a blogpost that included instructions on how to disallow the crawler.
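In practice, the opt-out is a short entry in a site’s robots.txt file. A minimal sketch of the kind of rule OpenAI’s instructions describe – here blocking the crawler from an entire site – looks like this:

# Disallow OpenAI's GPTBot from every page on the site
User-agent: GPTBot
Disallow: /

A narrower path after Disallow (for example, Disallow: /archive/ – an illustrative path, not any publisher’s actual setup) would shut GPTBot out of one section while leaving the rest of the site open to it.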
All the outlets examined added the block in August. Some have also disallowed CCBot, the web crawler for an open repository of web data known as Common Crawl that has also been used for AI projects.
CNN confirmed to Guardian Australia that it recently blocked GPTBot across its titles, but did not comment on whether the brand plans to take further action about the use of its content in AI systems.
A Reuters spokesperson said it regularly reviews its robots.txt and site terms and conditions. “Because intellectual property is the lifeblood of our business, it is imperative that we protect the copyright of our content,” she said.
The New York Times’ terms of service were recently updated to make the prohibition against “the scraping of our content for AI training and development … even more clear,” according to a spokesperson.
As of 3 August, its website rules explicitly prohibit the publisher’s content from being used for “the development of any software program, including, but not limited to, training a machine learning or artificial intelligence (AI) system” without consent.
News outlets globally are faced with decisions about whether to use AI as part of news gathering, and also how to deal with their content potentially being sucked into training pools by companies developing AI systems.
In early August, outlets including Agence France-Presse and Getty Images signed an open letter calling for regulation of AI, including transparency about “the makeup of all training sets used to create AI models” and consent for the use of copyrighted material.
Google has proposed that AI systems should be able to scrape the work of publishers unless they explicitly opt out.
In a submission to the Australian government’s review of the regulatory framework around AI, the company argued for “copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs”.
Research from OriginalityAI, a company that checks for the presence of AI content, shared this week found that major websites including Amazon and Shutterstock had also blocked GPTBot.
The Guardian’s robots.txt file does not disallow GPTBot.
The ABC, Australian Community Media, the Chicago Tribune, OpenAI and Common Crawl did not respond by deadline.
" |
299 | 2,023 | "‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai" | "Employees describe the psychological trauma of reading and viewing graphic content, low pay and abrupt dismissals US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness Office premises of Sama in Nairobi, Kenya.
Photograph: Tony Karumba/AFP/Getty Images Office premises of Sama in Nairobi, Kenya.
Photograph: Tony Karumba/AFP/Getty Images Artificial intelligence (AI) ‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models Employees describe the psychological trauma of reading and viewing graphic content, low pay and abrupt dismissals Niamh Rowe Wed 2 Aug 2023 11.00 EDT The images pop up in Mophat Okinyi’s mind when he’s alone, or when he’s about to sleep.
Okinyi, a former content moderator for OpenAI’s ChatGPT in Nairobi, Kenya, is one of four people in that role who have filed a petition to the Kenyan government calling for an investigation into what they describe as exploitative conditions for contractors reviewing the content that powers artificial intelligence programs.
“It has really damaged my mental health,” said Okinyi.
The 27-year-old said he would view up to 700 text passages a day, many depicting graphic sexual violence. He recalls he started avoiding people after having read texts about rapists and found himself projecting paranoid narratives on to people around him. Then last year, his wife told him he was a changed man, and left. She was pregnant at the time. “I lost my family,” he said.
The petition filed by the moderators relates to a contract between OpenAI and Sama – a data annotation services company headquartered in California that employs content moderators around the world. While employed by Sama in 2021 and 2022 in Nairobi to review content for OpenAI, the content moderators allege, they suffered psychological trauma, low pay and abrupt dismissal.
The 51 moderators in Nairobi working on Sama’s OpenAI account were tasked with reviewing texts, and some images, many depicting graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality and incest, the petitioners say.
The moderators say they weren’t adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and were offered no or inadequate psychological support. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson.
When the contract with OpenAI was terminated eight months early, “we felt that we were left without an income, while dealing on the other hand with serious trauma”, said petitioner Richard Mathenge, 37. Immediately after the contract ended, petitioner Alex Kairu, 28, was offered a new role by Sama, labeling images of cars, but his mental health was deteriorating. He wishes someone had followed up to ask: “What are you dealing with? What are you going through?” OpenAI declined to comment for this story.
OpenAI’s CEO, Sam Altman.
Sama said moderators had access to licensed mental health therapists on a 24/7 basis and received medical benefits to reimburse psychiatrists.
In regard to the allegations of abrupt dismissal, the Sama spokesperson said the company gave employees full notice that it was pulling out of the ChatGPT project, and that they were given the opportunity to participate in another project.
“We are in agreement with those who call for fair and just employment, as it aligns with our mission – that providing meaningful, dignified, living wage work is the best way to permanently lift people out of poverty – and believe that we would already be compliant with any legislation or requirements that may be enacted in this space,” the Sama spokesperson said.
The human labor powering AI’s boom
Since ChatGPT arrived on the scene at the end of last year, the potential for generative AI to leave whole industries obsolete has petrified professionals. That fear, of automated supply chains and sentient machines, has overshadowed concerns in another arena: the human labor powering AI’s boom.
Bots like ChatGPT are examples of large language models, a type of AI algorithm that teaches computers to learn by example. To teach Bard, Bing or ChatGPT to recognize prompts that would generate harmful materials, algorithms must be fed examples of hate speech, violence and sexual abuse. The work of feeding the algorithms examples is a growing business, and the data collection and labeling industry is expected to grow to over $14bn by 2030, according to GlobalData, a data analytics and consultancy firm.
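Concretely, each unit of that labeling work yields a training record pairing a passage with a human judgment. A hypothetical sketch of such a record – the field names and category here are illustrative, not Sama’s or OpenAI’s actual schema – might look like:

{"text": "<passage scraped from a chat forum>", "label": "graphic_violence", "flagged": true}

Fed enough of these records, a model learns to recognize similar material and to avoid reproducing it.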
Much of that labeling work is performed thousands of miles from Silicon Valley – in east Africa, India and the Philippines, and even by refugees living in Kenya’s Dadaab and Lebanon’s Shatila, camps with a large pool of multilingual workers who are willing to do the work for a fraction of the cost, said Srravya Chandhiramowuli, a researcher of data annotation at the University of London.
Nairobi in recent years has become a global hotspot for such work. An ongoing economic crisis, matched with Nairobi’s high rate of English speakers and mix of international workers from across Africa, make it a hub for cheap, multilingual and educated workers.
The economic conditions allowed Sama to recruit young, educated Kenyans, desperate for work, said Mathenge. “This was our first, ideal job,” he said.
During the week-long training to join the project, the environment was friendly and the content average, the petitioners said. “We didn’t suspect anything,” said Mathenge. But as the project progressed, text passages grew longer and the content more disturbing, he alleged.
The task of data labeling is at best monotonous, and at worst, traumatizing, the petitioners said. While moderating ChatGPT, Okinyi read passages detailing parents raping their children and children having sex with animals. In sample passages read by the Guardian, text that appeared to have been lifted from chat forums, include descriptions of suicide attempts, mass-shooting fantasies and racial slurs.
Mathenge’s team would end their days on a group call, exchanging stories of the horrors they’d read, he said. “Someone would say your content was more severe or grotesque than mine and so at least I can have that as my remedy,” he said. He remembers working in a secluded area of the office due to the nature of the work: “No one could see what we were working on,” he thought.
Before moderating content for OpenAI’s ChatGPT, Kairu loved to DJ. Be it at churches or parties, interacting with different groups of people was his favorite part of the job. But since reviewing content from the internet’s darkest corners over a period of more than six months, he has become introverted. His physical relationship with his wife has suffered, and he’s moved back in with his parents. “It has destroyed me completely,” he said.
Several of the petitioners said they received little psychological support from Sama, an allegation the company disputes. “I tried to reach out to the [wellness] department to give indication of what exactly was taking place with the team, but they were very non-committal,” said Mathenge. Okinyi said counselors on offer didn’t understand the unique toll of content moderation, so sessions “were never productive”.
Companies bear significant responsibility
According to its website, “Sama is driving an ethical AI supply chain that meaningfully improves employment and income outcomes.” Its clients include Google, Microsoft and eBay, among other household names, and in 2021 the company was named one of Forbes’s “AI 50 Companies to Watch”.
The company has workers in several places in east Africa, including more than 3,500 Kenyans. Sama was formerly Meta’s largest provider of content moderators in Africa, until it announced in January it would be “discontinuing” its work with the giant. The news followed numerous lawsuits filed against both companies for alleged union-busting, unlawful dismissals and multiple violations of the Kenyan constitution.
Sama canceled its contract with OpenAI in March 2022, eight months early, “to focus on our core competency of computer vision data annotation solutions”, the Sama spokesperson said. The announcement coincided with an investigation by Time, detailing how nearly 200 young Africans in Sama’s Nairobi datacenter had been confronted with videos of murders, rapes, suicides and child sexual abuse as part of their work, earning as little as $1.50 an hour while doing so.
But now, former ChatGPT moderators are calling for new legislation to regulate how “harmful and dangerous technology work” is outsourced in Kenya, and for existing laws to “include the exposure to harmful content as an occupation hazard”, according to the petition. They also want an investigation into how the ministry of labor has failed to protect Kenyan youth from outsourcing companies.
Kenya’s ministry of labor declined to comment on the petition.
But companies like OpenAI bear a significant responsibility too, said Cori Crider, director of Foxglove, a legal non-profit that is supporting the case. “Content moderators work for tech companies like OpenAI and Facebook in all but name,” Crider said in a statement. “The outsourcing of these workers is a tactic by tech companies to distance themselves from the awful working conditions content moderators endure.” Crider said she did not expect the Kenyan government to respond to the petition anytime soon. She wants to see an investigation into the pay, mental health support and working conditions of all content moderation and data labeling offices in Kenya, plus greater protections for what she considers to be an “essential workforce”.
Beyond the petition, signs of potential regulation are emerging. In May, the first trade union for content moderators in Africa was formed, when 150 social media content moderators from TikTok, YouTube, Facebook and ChatGPT met in Nairobi. And while outsourced workers are not legal employees of their clients, in a landmark ruling last month, employment court judge Byram Ongaya ruled that Meta is the “true employer” of its moderators in Kenya.
It remains unclear to whom OpenAI currently outsources its content moderation work.
To move forward, it helps Okinyi to think of the ChatGPT users he has protected. “I consider myself a soldier and soldiers take bullets for the good of the people,” he said. Even though those bullet wounds may stay forever, he considers himself a hero.
" |
300 | 2,023 | "A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton | The Guardian" | "https://www.theguardian.com/commentisfree/2023/aug/12/a-tsunami-of-ai-misinformation-will-shape-next-years-knife-edge-elections" | "If you thought social media had a hand in getting Trump elected, watch what happens when you throw AI into the mix US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness An AI-generated fake image purporting to show an explosion at the Pentagon, which circulated on Twitter.
Illustration: Twitter An AI-generated fake image purporting to show an explosion at the Pentagon, which circulated on Twitter.
Illustration: Twitter The Observer Artificial intelligence (AI) A tsunami of AI misinformation will shape next year’s knife-edge elections If you thought social media had a hand in getting Trump elected, watch what happens when you throw AI into the mix Sat 12 Aug 2023 11.00 EDT I t looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.
The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term.
The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations unhindered in maximising shareholder value while incinerating the planet.
So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it.
Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.
In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated , but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly for the way social media are engines for disseminating BS and disinformation at light-speed.
And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT , Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.
So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects” and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.
You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine. We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it? That’s what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).
The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

What I’ve been reading

Shake it up
David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.
Dish the dirt
There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.
Truth bombs
What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.
" |
301 | 2,023 | "UK publishers urge Sunak to protect works ingested by AI models | Publishing | The Guardian" | "https://www.theguardian.com/books/2023/aug/31/uk-publishers-association-ai-models-sunak" | "Publishers Association’s call comes as ChatGPT firm argues US lawsuit ‘misconceives scope’ of copyright law US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing Film Books Music Art & design TV & radio Stage Classical Games Sarah Silverman is one of the authors suing OpenAI in the US over claims the company has breached copyright law.
Photograph: Earl Gibson III/Getty Images Sarah Silverman is one of the authors suing OpenAI in the US over claims the company has breached copyright law.
Photograph: Earl Gibson III/Getty Images Publishing UK publishers urge Sunak to protect works ingested by AI models Publishers Association’s call comes as ChatGPT firm argues US lawsuit ‘misconceives scope’ of copyright law Global technology editor Thu 31 Aug 2023 08.00 EDT UK publishers have urged the prime minister to protect authors’ and other content makers’ intellectual property rights as part of a summit on artificial intelligence.
The intervention came as OpenAI, the company behind the ChatGPT chatbot, argued in a legal filing that authors suing the business over its use of their work to train powerful AI systems “misconceived the scope” of US copyright law.
The letter from the Publishers Association, which represents publishers of digital and print books as well as research journals and educational content, asks Rishi Sunak to make clear at the November summit that intellectual property law must be respected when AI systems absorb content produced by the UK’s creative industries.
Generative AI tools such as ChatGPT – the term for technology that produces convincing text, image and audio content from simple prompts – are trained on vast amounts of data taken from the internet, including work by published authors.
In its letter, the Publishers Association said: “On behalf of our industry and the wider content industries, we ask that your government makes a strong statement either as part of, or in parallel with, your summit to make clear that UK intellectual property law should be respected when any content is ingested by AI systems and a licence obtained in advance.” Authors have been at the forefront of protests at what they say is unlicensed use of their work to train chatbots.
Sarah Silverman, Mona Awad and Paul Tremblay are among those who are suing OpenAI over claims that the company has breached copyright law by training its chatbot on novels without the permission of authors. This week OpenAI filed a response to the lawsuits, claiming that “the use of copyrighted materials by innovators in transformative ways does not violate copyright”.
In the UK, the government has backtracked on an initial proposal to allow AI developers free use of copyrighted books and music for training AI models. The exemption was raised by the Intellectual Property Office in June 2022 but ministers have since rowed back on it. In a report published on Wednesday, MPs said the handling of the exemption proposal showed a “clear lack of understanding of the needs of the UK’s creative industries”.
The letter from the publishers’ trade body said the UK’s “world-leading” creative industries should be supported in parallel with AI development. It pointed to research that estimated the publishing industry to be worth £7bn to the UK economy, while employing 70,000 people and supporting hundreds of thousands of authors.
“This government has rightly recognised the huge growth potential of the creative and tech sectors and that is best achieved as equal partners. We hope you will consider our request and support your relevant government departments in taking action that will put in place the right business conditions for AI development in the UK,” wrote Dan Conway, chief executive of the Publishers Association.
A government spokesperson said ministers were committed to a “balanced and pragmatic” approach to the use of AI in the creative industries.
“To support this, the Intellectual Property Office is working with AI firms and rights holders to produce an agreement and guidance on copyright. This supports our ambition to make the UK a world leader in AI research and development, while making sure our copyright framework continues to promote and reward innovation and investment in the UK’s creative industries.”
" |
302 | 2,023 | "AI firms must be held responsible for harm they cause, ‘godfathers’ of technology say | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/oct/24/ai-firms-must-be-held-responsible-for-harm-they-cause-godfathers-of-technology-say" | "Authors and academics also warn development of advanced systems ‘utterly reckless’ without safety checks US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness The scales of justice transform into pixelated data.
Composite: Guardian Design/Getty Images The scales of justice transform into pixelated data.
Composite: Guardian Design/Getty Images Artificial intelligence (AI) AI firms must be held responsible for harm they cause, ‘godfathers’ of technology say Authors and academics also warn development of advanced systems ‘utterly reckless’ without safety checks Global technology editor Tue 24 Oct 2023 01.00 EDT Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.
Tuesday’s intervention was made as international politicians, tech companies, academics and civil society figures prepare to gather at Bletchley Park next week for a summit on AI safety.
A co-author of the policy proposals from 23 experts said it was “utterly reckless” to pursue ever more powerful AI systems before understanding how to make them safe.
“It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.” He added: “There are more regulations on sandwich shops than there are on AI companies.” The document urged governments to adopt a range of policies, including:

Governments allocating one-third of their AI research and development funding, and companies one-third of their AI R&D resources, to safe and ethical use of systems.
Giving independent auditors access to AI laboratories.
Establishing a licensing system for building cutting-edge models.
Requiring AI companies to adopt specific safety measures if dangerous capabilities are found in their models.
Making tech companies liable for foreseeable and preventable harms from their AI systems.
Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” , who won the ACM Turing award – the computer science equivalent of the Nobel prize – in 2018 for their work on AI.
Both are among the 100 guests invited to attend the summit. Hinton resigned from Google this year to sound a warning about what he called the “existential risk” posed by digital intelligence, while Bengio, a professor of computer science at the University of Montreal, joined him and thousands of other experts in signing a letter in March calling for a moratorium on giant AI experiments.
Other co-authors of the proposals include the bestselling author of Sapiens, Yuval Noah Harari, Daniel Kahneman, a Nobel laureate in economics, and Sheila McIlraith, a professor in AI at the University of Toronto, as well as award-winning Chinese computer scientist Andy Yao.
The authors warned that carelessly developed AI systems threaten to “amplify social injustice, undermine our professions, erode social stability, enable large-scale criminal or terrorist activities and weaken our shared understanding of reality that is foundational to society.” They warned that current AI systems were already showing signs of worrying capabilities that point the way to the emergence of autonomous systems that can plan, pursue goals and “act in the world”. The GPT-4 AI model that powers the ChatGPT tool, which was developed by the US firm OpenAI, has been able to design and execute chemistry experiments, browse the web and use software tools including other AI models, the experts said.
“If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals,” the authors wrote, adding that “we may not be able to keep them in check”.
Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours.
The safety summit next week will focus on existential threats posed by AI, such as aiding the development of novel bioweapons and evading human control. The UK government is working with other participants on a statement that is expected to underline the scale of the threat from frontier AI – the term for advanced systems. However, while the summit will outline the risks from AI and measures to combat the threat, it is not expected to formally establish a global regulatory body.
Some AI experts argue that fears about the existential threat to humans are overblown. The other co-winner of the 2018 Turing award alongside Bengio and Hinton, Yann LeCun, now chief AI scientist at Mark Zuckerberg’s Meta and who is also attending the summit, told the Financial Times that the notion AI could exterminate humans was “preposterous”.
Nonetheless, the authors of the policy document have argued that if advanced autonomous AI systems did emerge now, the world would not know how to make them safe or conduct safety tests on them. “Even if we did, most countries lack the institutions to prevent misuse and uphold safe practices,” they added.
" |
303 | 2,023 | "Incredibly smart or incredibly stupid? What we learned from using ChatGPT for a year | ChatGPT | The Guardian" | "https://www.theguardian.com/technology/2023/oct/12/chatgpt-uses-writing-recipes-one-year" | "As the tool becomes less of a curiosity and more a part of daily life, fans are finding clever uses – and discovering limitations US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness ChatGPT was released in late November last year.
Photograph: Costfoto/NurPhoto/Shutterstock ChatGPT was released in late November last year.
Photograph: Costfoto/NurPhoto/Shutterstock ChatGPT Incredibly smart or incredibly stupid? What we learned from using ChatGPT for a year As the tool becomes less of a curiosity and more a part of daily life, fans are finding clever uses – and discovering limitations Thu 12 Oct 2023 13.39 EDT Next month ChatGPT will celebrate its first birthday – marking a year in which the chatbot, for many, turned AI from a futuristic concept to a daily reality.
Its universal accessibility has led to a host of concerns, from job losses to disinformation to plagiarism. Over the same period, tens of millions of users have been investigating what the platform can do to make their lives just a little bit easier.
Upon its release, users quickly embraced ChatGPT’s potential for silliness, asking it to play 20 questions or write its own songs. As its first anniversary approaches, people are using it for a huge range of tasks. We’ve all heard about uses like crafting emails, writing student essays and penning cover letters. But with the right prompts, it can take on jobs that are more esoteric but equally useful in everyday life. Here are a few that might come in handy.
Jargon demystifier

You’re at a work meeting, and the accountants are talking about GAAP operating income for Q4 of FY22, the design people are panicked about kerning, and the CEO wants you to circle back to drill down on some pain points. On top of that, your British boss says your work is “quite good” but strangely doesn’t seem happy with it, while your US colleague claims everything anyone has ever done is amazing. Users say they’ve turned to ChatGPT for help as an intermediary, employing it to translate workplace jargon so everyone’s on the same page about the concerns you flagged, tnx.
This isn’t limited to the office: people have used ChatGPT to, for instance, translate a sleep study’s medical terminology, or help craft a legal opinion.
It can serve as an intergenerational go-between: users have turned it into a gen Z slang translator (sample sentence from a description of a key historical event: “Titanic, flexing as the unsinkable chonk, sets sail with mad swag, a boatload of peeps, and the vibes of a 1912 rave”).
Pitiless critic

Sometimes you want a real critique of your work, a harsh assessment that your friends and family are too nice to provide. For some, ChatGPT is that critic (though whether the word “real” applies here is debatable). “I use ChatGPT to brutally audit where my copy is falling short of the target audience’s expectations,” a copywriter wrote on Reddit.
Some have even found it can give decent (if imperfect) criticism of fiction writing, pointing out redundancies, missing characterization or weak imagery.
There are, of course, ethical questions about the use of ChatGPT in work and school settings. In response, some argue that asking it to be your critic, and learning from its feedback, is a way to improve your writing without letting it put words in your mouth.
It’s not always an easy task: what it gives you depends entirely on how you structure the prompt. Some users find it tough to find the language to “convince” it to be harsh enough. And you’ll get more appropriate feedback if you give it a detailed task – “give me feedback” might not help as much as “I’m writing an essay for college – tell me whether it’s well-structured and clear”.
Robot with feelings

Maybe you don’t want ChatGPT to be mean – maybe you want the opposite. Users have asked ChatGPT for help being nicer in their work emails, especially when they’re secretly fuming. “I write to it: please make me sound like less of an asshole,” said one user.
Sous chef

It’s dinnertime and there’s stuff in the kitchen – but you have no idea what to do with a half-eaten yogurt, a leftover chicken leg, a bag of flour and some forgotten tomatoes on the verge of becoming truly upsetting. Users report that ChatGPT has helped them create impressive meals out of what they have, or come up with ideas based on what’s around and a specified grocery budget.
Many users report being pleased with the results, though some recipes sound perhaps too creative: garbanzo bean and cheddar cheese soup, a peanut butter and Nutella quesadilla, and a “carrot and grape salad with muesli crunch” (based on what’s in my own kitchen).
ChatGPT invents an odd recipe.
Last month, OpenAI, the tool’s developer, added an image-recognition feature that makes this task even easier – instead of having to list ingredients, users can take photos of the food in their cabinets and ChatGPT will come up with recipes.
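For readers curious what that workflow looks like in code, here is a hedged sketch using OpenAI's Python client; the model name, image URL and prompt are illustrative assumptions rather than details reported in this article.

# A hypothetical sketch of the photo-to-recipe workflow via OpenAI's API.
# Model name, image URL and prompt are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # an assumed vision-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Suggest three recipes using only the ingredients visible in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/my-kitchen-shelf.jpg"}},
        ],
    }],
    max_tokens=500,
)
print(response.choices[0].message.content)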
Results have been mixed. Beyond the fact that the bot has no taste buds, some users have expressed safety concerns, saying ChatGPT may, for example, convince inexperienced chefs to undercook meat.
Whiteboard interpreter

Following the update allowing ChatGPT to “see”, users have found its interpretation skills to be alarmingly impressive. In a clip making the rounds, an AI developer, Mckay Wrigley, shows it a hand-drawn flowchart on a whiteboard, which it’s able to turn into code that Wrigley runs – and it works. The platform can even tell that the green arrows indicate the steps should be reordered. So you can stop beating yourself up for never having learned to code.
You can give ChatGPT a picture of your team’s whiteboarding session and have it write the code for you.
This is absolutely insane.
pic.twitter.com/bGWT5bU8MK

Speedy summarizer

ChatGPT can act as your personal SparkNotes, condensing large quantities of information into small ones – whether that information is in the form of articles, meeting notes or book chapters. Combined with the right browser plugin, it can even summarize entire YouTube videos so you don’t have to listen to an insufferable Ted Talker.
Some users have found it goes overboard with summaries, even making them longer than the original text. Others say clever prompts, such as “be my secretary and act as though you were taking the minutes of a meeting”, seem to help.
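As a rough sketch of that “minutes of a meeting” trick through the API (the model choice and the input file are placeholder assumptions):

# A hypothetical sketch of the summarization prompt described above.
from openai import OpenAI

client = OpenAI()
notes = open("meeting_notes.txt").read()  # placeholder input file

response = client.chat.completions.create(
    model="gpt-4",  # assumed model choice
    messages=[
        {"role": "system",
         "content": "Be my secretary and act as though you were taking the minutes of a meeting."},
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)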
It’s important to remember that while ChatGPT can seem incredibly smart, it is also incredibly stupid, as this index of some of its many failures proves. It has struggled to count the number of N’s in “banana”, failed to correctly answer its own riddle and agreed that 1+0.9 makes 1.8.
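For the record, both answers are trivial to verify deterministically; this quick check is included only to underline how basic the failures are.

# Ground truth for the failures mentioned above.
print("banana".count("n"))  # 2 – the number of N's in "banana"
print(1 + 0.9)              # 1.9, not 1.8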
Far more dangerously, it makes up “facts” – such as a sexual harassment scandal that didn’t happen , starring a real professor.
You’re a human, it’s a bot – take it all with a big grain of salt. Or vinegar, which it recommends as a substitute.
" |
304 | 2,023 | "When it comes to creative thinking, it’s clear that AI systems mean business | John Naughton | The Guardian" | "https://www.theguardian.com/commentisfree/2023/sep/23/chatbots-ai-gpt-4-university-students-creativity" | "The chatbot GPT-4 has produced more viable commercial ideas more efficiently and more cheaply than US university students US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian view Columnists Letters Opinion videos Cartoons A head for business: OpenAI’s GPT-4 chatbot outsmarted US university students in an experiment aimed at creating a product that would retail for less than $50.
Photograph: Jaap Arriens/NurPhoto/REX/Shutterstock A head for business: OpenAI’s GPT-4 chatbot outsmarted US university students in an experiment aimed at creating a product that would retail for less than $50.
Photograph: Jaap Arriens/NurPhoto/REX/Shutterstock The Observer Artificial intelligence (AI) When it comes to creative thinking, it’s clear that AI systems mean business The chatbot GPT-4 has produced more viable commercial ideas more efficiently and more cheaply than US university students I n all the frenzied discourse about large language models (LLMs) such as GPT-4 there is one point on which everyone seems to agree: these models are essentially stochastic parrots – namely, machines that are good at generating convincing sentences, but do not actually understand the meaning of the language they are processing. They have somehow “read” (that is, ingested) everything ever published in machine-readable form and create sentences word by word , at each point making a statistical guess of “what one might expect someone to write after seeing what people have written on billions of webpages, etc”. That’s it! Ever since ChatGPT arrived last November , people have been astonished by the capabilities of these parrots – how humanlike they seem to be and so on. But consolation was drawn initially from the thought that since the models were drawing only on what already resided in their capacious memories, then they couldn’t be genuinely original: they would just regurgitate the conventional wisdom embedded in their training data. That comforting thought didn’t last long, though, as experimenters kept finding startling and unpredictable behaviours of LLMs – facets now labelled “ emergent abilities ”.
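That word-by-word “statistical guess” can be made concrete with a toy sketch. The following is an illustrative assumption, a crude bigram model over a tiny corpus and nothing like a real LLM, included only to show the mechanic the column describes.

# Toy illustration of word-by-word statistical generation (a bigram model).
# Real LLMs use neural networks trained on vast corpora; this tiny corpus
# is an illustrative assumption, not how GPT-4 actually works.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which words tend to follow which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate by repeatedly guessing a plausible next word.
word = "the"
sentence = [word]
for _ in range(6):
    word = random.choice(following[word])  # the "statistical guess"
    sentence.append(word)
print(" ".join(sentence))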
From the beginning, many people have used LLMs as aids to brainstorming. Ask one of them for five ways to reduce your household’s carbon footprint and it’ll come up with a list of reasonable and actionable suggestions. So it’s clear that the combination of human plus LLM can be a creative partnership. But of course what we’d really like to know is whether the machines on their own are capable of creativity. Ah, but isn’t creativity a slippery concept – something that’s hard to define but that we nevertheless recognise when we see it? That hasn’t stopped psychologists from trying to measure it, though, via tools such as the alternative uses test and the similar Torrance test.
And it turns out that one LLM – GPT-4 – beats 91% of humans on the former and 99% of them on the latter. So as the inveterate artificial intelligence user Ethan Mollick puts it: “We are running out of creativity tests that AIs cannot ace.” Mollick works in a business school (Wharton, based at the University of Pennsylvania) and has been a cheerleader for LLMs from the beginning. Some of his colleagues conducted an experiment with GPT-4 and 200 of their students, setting humans and machine the same challenge: come up with an idea for a product aimed at American college students that would retail for less than $50.
And the results? “ChatGPT-4 generated more, cheaper and better ideas than the students. Even more impressive, from a business perspective, was that the purchase intent from outside judges was higher for the AI-generated ideas as well! Of the 40 best ideas rated by the judges, 35 came from ChatGPT.”

The really illuminating aspect of the study, though, was an inference drawn from it by the researchers about the economics of it. “A professional working with ChatGPT-4,” they write, “can generate ideas at a rate of about 800 ideas per hour. At a cost of $500 per hour of human effort, a figure representing an estimate of the fully loaded cost of a skilled professional, ideas are generated at a cost of about $0.63 each… At the time we used ChatGPT-4, the API fee [application programming interface, which allows two or more computer programs to communicate with each other] for 800 ideas was about $20. For that same $500 per hour, a human working alone, without assistance from an LLM, only generates 20 ideas at a cost of roughly $25 each… For the focused idea generation task itself, a human using ChatGPT-4 is thus about 40 times more productive than a human working alone.”

If you wanted an insight about how corporations will view this technology, then you couldn’t do better than this. Reading it brought to mind Ted Chiang’s perceptive New Yorker essay about how AI would in fact be used. “I suggest,” he wrote, “that we think about AI as a management consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and AI systems are used for many reasons, too. But the similarities between McKinsey – a consulting firm that works with 90% of the Fortune 100 – and AI are also clear.” Chiang quotes a former McKinsey employee’s description of the consultancy as “capital’s willing executioners”. If you’re a senior executive who has to take some unpalatable decisions but needs plausible deniability, being able to cite an external consultant – or a new technology? – is a good way to do it. So, says Chiang, as AI becomes more powerful and flexible, the question we should be asking is: is there any way to keep it from being another version of McKinsey? You only have to ask the question to know the answer.
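As an aside, the researchers’ back-of-envelope arithmetic quoted above is easy to verify; the figures are theirs, and this check is merely a restatement.

# Verifying the study's back-of-envelope economics quoted above.
hourly_cost = 500        # fully loaded cost of a skilled professional, $
ideas_with_llm = 800     # ideas per hour with ChatGPT-4
ideas_alone = 20         # ideas per hour unaided
api_fee = 20             # approximate API fee for 800 ideas, $

print(hourly_cost / ideas_with_llm)  # 0.625, ≈ $0.63 per idea (labor only)
print(hourly_cost / ideas_alone)     # 25.0 dollars per idea unaided
print(ideas_with_llm / ideas_alone)  # 40x productivity ratio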
What I’ve been reading

Deutsche courage
Just for Fun is a lovely essay by Rebecca Baumgartner on the 3 Quarks Daily platform about people’s reaction to the news that she’s learning German – for fun!

Hobbes nobbing
AI and Leviathan: Part II is No 2 in a remarkable series of essays by Samuel Hammond on his Second Best blog.

Man of many words
Henry Oliver’s essay on Substack’s Common Reader blog – Samuel Johnson, Opsimath – is a nice tribute to the Great Cham.
" |
305 | 2,023 | "Is AI more creative than the human brain? I doubt it – and I definitely want humans to stay in charge | Stefan Stern | The Guardian" | "https://www.theguardian.com/commentisfree/2023/oct/22/ai-more-creative-humans-in-charge-inspiration" | "Businesses keep trying to prove AI’s superior creativity, but haven’t proved it can compete with human inspiration US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian view Columnists Letters Opinion videos Cartoons ‘We stared up at the famous image of a languorous Adam reaching out his forefinger to that belonging to benign, bearded God.’ Photograph: Peter Barritt/Alamy ‘We stared up at the famous image of a languorous Adam reaching out his forefinger to that belonging to benign, bearded God.’ Photograph: Peter Barritt/Alamy Opinion Artificial intelligence (AI) Is AI more creative than the human brain? I doubt it – and I definitely want humans to stay in charge Businesses keep trying to prove AI’s superior creativity, but haven’t proved it can compete with human inspiration Sun 22 Oct 2023 08.00 EDT P rove you’re not a robot. It’s (fairly) easy if you try. You could scroll down or click the little x in the corner of the screen to get rid of me. If you are reading the print edition you could just turn the page.
One of the indignities of the digital age is being asked, constantly, to confirm we are who we say we are, that we are indeed a human being. Something feels slightly amiss when the (non-human) technology demands that we convince it that we are not the same as them. Big (and sometimes overexcited) claims are being made for artificial intelligence, the most recent being the claim from Wharton business school in Philadelphia that ChatGPT is more creative than human beings (well, more creative than MBA students, anyway).
Students and AI were challenged to come up with ideas for new, cheap products. When potential customers were surveyed online, the products suggested by AI seemed to be more popular. They had certainly been dreamed up much more quickly and in larger numbers than the ideas put forward by mere humans.
Digging into the research, however, caused this particular human being to experience a jolt of scepticism. In a footnote, the researchers concede there are concerns that AI is being used to provide answers for these online consumer panels. Are robots passing judgment on robots? “We believe that we were indeed surveying humans,” the researchers say.
When I looked at the list of “new” products – “multifunctional desk organiser”, “noise-cancelling headphones”, “compact printer” – they did not exactly scream “innovation”. Indeed, the researchers admit the ideas produced by the students scored higher for novelty. But they dismissed the idea that novelty was necessarily an advantage in new product creation.
That may be so. How many new stories did Shakespeare come up with? The Renaissance was in part a conscious attempt to imitate and recreate the art of antiquity. Originality is a slippery concept, as any good intellectual property lawyer will tell you – for a fee.
The Wharton researchers try not to over-claim. AI could become “a creative co-pilot”, they say. “Together, you can become a more innovative team.” The tech writer Kate Bevan agrees with that last point. You can use AI as “a way to express creativity, but AI itself is not creative”, she told me.
The question has been a live one in Hollywood, where the 148-day strike by the Writers Guild of America forced studio bosses to acknowledge the unique contribution that only human beings can make.
The union won concessions. Writers will benefit if their productivity is enhanced by AI. But AI will not be used to replace them. As Adam Seth Litwin, associate professor of industrial and labour relations at Cornell University, explained in a piece for the New York Times, the studios can use AI "to generate a first draft, but the writers to whom they deliver it get the credit". The human hand, and brain, matter.
Last summer, I visited Rome and spent a wonderful few hours in the Vatican galleries, ending up with a few minutes in the Sistine Chapel. I gawped up at the ceiling, as so many millions of people have done in the five centuries since the decorator finished his work there in 1512. The "decorator": Michelangelo.
In his poem Long-legged Fly, Yeats imagines the artist working away up there on the scaffolding: “With no more sound than the mice make / His hand moves to and fro.” We stared up at the famous image of a languorous Adam reaching out his forefinger to that belonging to benign, bearded God. And, of course, I bought the print of that detail on our way out, which is now on my kitchen wall at home. I look at it every day.
You don’t have to believe in a divine spark. But when Adam reaches out like that he is, I think, doing what all of us try to do, one way or another, every day. He is trying to be creative, to be human. He is seeking inspiration. He is not a robot. He is something much better than that: infinite, full of potential, unpredictable.
The new technology is great: exciting, powerful, also full of potential. But I think we living things ought to remain in charge. On behalf of humanity, I would respectfully ask some of the more overexcited tech bros: prove you’re not the idiot.
Stefan Stern is co-author of Myths of Management and the former director of the High Pay Centre.
" |
306 | 2,023 | "Machine learning: could ChatGPT become just another tool for Australia’s year 12 – like Wikipedia? | Australian education | The Guardian" | "https://www.theguardian.com/australia-news/2023/oct/09/chatgpt-ai-chatbots-in-schools-australia-measures-benefits-impacts" | "An International Baccalaureate student says she and her peers find the AI platform helpful for brainstorming – but that doesn’t mean safeguards aren’t needed "https://www.theguardian.com/australia-news/live/2023/oct/09/australia-news-live-penny-wong-israel-commonwealth-games-inquiry-referendum-indigenous-voice-to-parliament-labor-victoria-nsw-sa-plane-crash-queensland\">Follow our Australia news live blog for latest updates Get our "https://www.theguardian.com/email-newsletters?CMP=cvau_sfl\">morning and afternoon news emails , "https://app.adjust.com/w4u7jx3\">free app or "https://www.theguardian.com/australia-news/series/full-story?CMP=cvau_sfl\">daily news podcast US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing World Europe US Americas Asia Australia Middle East Africa Inequality Global development Trinity Meachem is a year 12 International Baccalaureate high school student at Wesley College in Melbourne. She and her peers have been using ChatGPT to generate ideas for assignments.
Photograph: Christopher Hopkins/The Guardian Trinity Meachem is a year 12 International Baccalaureate high school student at Wesley College in Melbourne. She and her peers have been using ChatGPT to generate ideas for assignments.
Photograph: Christopher Hopkins/The Guardian Australian education Machine learning: could ChatGPT become just another tool for Australia’s year 12 – like Wikipedia? An International Baccalaureate student says she and her peers find the AI platform helpful for brainstorming – but that doesn’t mean safeguards aren’t needed Follow our Australia news live blog for latest updates Get our morning and afternoon news emails , free app or daily news podcast Sun 8 Oct 2023 19.30 EDT As ChatGPT hit headlines last summer, schools and education providers began panicking about how to handle the emerging artificial intelligence platform.
Some Australian states and territories temporarily banned the technology amid plagiarism concerns. But International Baccalaureate – which offers education programs around the world – took a different approach.
In March IB released a statement confirming it wouldn’t ban the use of ChatGPT in its curriculum.
“Artificial intelligence (AI) technology will become part of our everyday lives,” it said. “We, therefore, need to adapt and transform our educational programs and assessment practices so that students can use these new AI tools ethically and effectively.” Last week education ministers agreed to a draft framework guiding the responsible use of artificial intelligence in Australia’s schools from term 1 in 2024.
It means this year’s public school cohort will be the last to have navigated their high school exams without the technology.
One of the few who has been allowed to use the technology at school, year 12 IB student Trinity Meachem, has found ChatGPT a huge help – and she will continue using it as she heads into her final assessments.
She and her peers haven’t been using ChatGPT to cheat – instead they’ve been using it to generate ideas for assignments.
“It’s a big help for brainstorming and thinking outside the box,” the Wesley student says. “It can be used as a good starting point to come up with conceptual things to write about in reports.” Since the IB was introduced to Australia in 1978, 208 private schools have taken up programs, including 80 that run its year 11 and 12 diplomas.
Last year 2,421 students sat IB examinations, compared with 75,493 students who sat the HSC and 90,780 for VCE.
Unlike the VCE's school-assessed coursework or the HSC, students studying the IB do a number of internal assessments, generally in the form of reports or oral exams, that promote self-driven learning. The rest, between 50% and 80%, comes down to the end-of-year exams, which cover two years of content and are assessed by the students' teachers.
Final results rank between 24 and 45, which are then converted to an Atar equivalent, allowing IB diploma-holders to enrol in Australian universities.
As Meachem heads into her final exams, she sees ChatGPT as yet another addition to the toolkit.
“There’s a lot more online collaboration now than in the past,” she says, pointing to collaborative note-taking tools such as OneNote.
The head of IB World Schools, Stuart Jones, says the sense of student agency, digital innovation and broad assessment base is why the IB has taken a nuanced approach to artificial intelligence.
"It's changing so fast," Jones says. "The response is not banning things, the response is working with it." But that doesn't mean no safeguards. The IB has strict zero-tolerance plagiarism rules and uses Turnitin, an online checker, to monitor coursework.
Any quote or material generated from AI in assignments has to be credited and referenced in a student’s bibliography. Unreferenced work produced by AI tools isn’t considered a student’s own.
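As an illustration only – the article does not prescribe a citation format, and the exact style is set by each school – such a bibliography entry might look something like the made-up example below.

    OpenAI ChatGPT, response to the prompt "suggest angles for a report on memory", generated 3 October 2023, https://chat.openai.com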
Come exam time, notes, mobile phones and other IT equipment are banned, minimising the risk of cheating.
"If a student suddenly turns up with this masterpiece, it should ring alarm bells," Jones says.
“You need a whole-school approach, where teacher knowledge of the student is part of the process of encouraging students to avoid academic malpractice.
“When you look at why students cheat, a lot of the time they don’t know the rules. They don’t know the difference between collaboration and collusion. They need to be taught how to research and reference.” Jones points to Wikipedia, which initially “horrified” schools and has now become a useful beginning tool to frame future research.
"What it pushed schools to do was to say, 'OK, how can we help students to research? How can we develop those information literacy skills?' We've thought pretty much in that same vein with ChatGPT." Meachem is completing maths, English, psychology, chemistry, biology and German, with the aim of getting into the University of Melbourne and studying a Bachelor of Science.
“It’s quite challenging,” she says. “Unlike VCE you’ve got two years’ worth of content to remember.
“But I like that it’s self-led – you design your own experiments, run your own studies … you need to be organised.” Asked how she’s feeling about the final exams, she replies: “stressed” – “pretty standard for any year 12 student”.
“We only finished learning high level content in the last lesson of term so it’s a quick turnaround to revise,” she says.
"But sport has helped, taking a break from the mindset of constantly studying when you get home. Knowing when to cut yourself some slack is a big one … year 12 is hard enough to get through without balance."
" |
307 | 2,023 | "The apocalypse isn’t coming. We must resist cynicism and fear about AI | Stephen Marche | The Guardian" | "https://www.theguardian.com/commentisfree/2023/may/15/artificial-intelligence-cynicism-technology" | "Remember when WeWork would kill commercial real estate? Crypto would abolish banks? The metaverse would end meeting people in real life? US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian view Columnists Letters Opinion videos Cartoons ‘Computers do not have will. Algorithms are a series of instructions.’ Photograph: Dmitrii Kotin/Alamy ‘Computers do not have will. Algorithms are a series of instructions.’ Photograph: Dmitrii Kotin/Alamy Opinion Artificial intelligence (AI) The apocalypse isn’t coming. We must resist cynicism and fear about AI Remember when WeWork would kill commercial real estate? Crypto would abolish banks? The metaverse would end meeting people in real life? Mon 15 May 2023 04.06 EDT I n the field of artificial intelligence, doomerism is as natural as an echo. Every development in the field, or to be more precise every development that the public notices, immediately generates an apocalyptic reaction. The fear is natural enough; it comes partly from the lizard-brain part of us that resists whatever is new and strange, and partly from the movies, which have instructed us, for a century, that artificial intelligence will take the form of an angry god that wants to destroy all humanity.
The recent public letter calling for a six-month ban on AI lab work will not have the slightest measurable effect on the development of artificial intelligence, it goes without saying. But it has changed the conversation: every discussion about artificial intelligence must begin with the possibility of total human extinction. It’s silly and, worse, it’s an alibi, a distraction from the real dangers technology presents.
The most important thing to remember about tech doomerism in general is that it’s a form of advertising, a species of hype. Remember when WeWork was going to end commercial real estate? Remember when crypto was going to lead to the abolition of central banks? Remember when the metaverse was going to end meeting people in real life? Silicon Valley uses apocalypse for marketing purposes: they tell you their tech is going to end the world to show you how important they are.
I have been working with and reporting on AI since 2017, which is prehistoric in this field. During that time, I have heard, from intelligent sources who were usually reliable, that the trucking industry was about to end, that China was in possession of a trillion-parameter natural language processing AI with superhuman intelligence. I have heard geniuses – bona fide geniuses – declare that medical schools should no longer teach radiology because it would all be automated soon.
One of the reasons AI doomerism bores me is that it’s become familiar – I’ve heard it all before. To stay sane, I have had to abide by twin principles: I don’t believe it until I see it. Once I see it, I believe it.
Many of the most important engineers in the field indulge in AI doomerism; this is unquestionably true. But one of the defining features of our time is that the engineers – who do not, in my experience, have even the faintest education in the humanities or even recognize that society and culture are worthy of study – simply have no idea how their inventions interact with the world. One of the most prominent signatories of the open letter was Elon Musk, an early investor in OpenAI. He is brilliant at technology. But if you want to know how little he understands about people and their relationships to technology, go on Twitter for five minutes.
Not that there aren’t real causes of worry when it comes to AI; it’s just that they’re almost always about something other than AI. The biggest anxiety – that an artificial general intelligence is about to take over the world – doesn’t even qualify as science fiction. That fear is religious.
Computers do not have will. Algorithms are a series of instructions. The properties that emerge in the “emergent properties” of artificial intelligence have to be discovered and established by human beings. The anthropomorphization of statistical pattern-matching machinery is storytelling; it’s a movie playing in the collective mind, nothing more. Turning off ChatGPT isn’t murder. Engineers who hire lawyers for their chatbots are every bit as ridiculous as they sound.
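To make that concrete, here is a minimal sketch in Python – a toy bigram generator invented for this illustration, not anything from the article or any real model – of what "statistical pattern-matching machinery" amounts to: a short series of instructions that looks up a frequency table and picks a likely continuation, with no will anywhere in the loop.

    import random

    # Toy bigram counts, invented purely for illustration.
    counts = {
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 1},
        "dog": {"ran": 1},
        "sat": {"down": 1},
    }

    def next_word(word):
        # Follow the instructions: look up the table, make a weighted pick.
        options = counts.get(word)
        if not options:
            return None  # no observed continuation, so stop
        words = list(options)
        weights = [options[w] for w in words]
        return random.choices(words, weights=weights)[0]

    sentence = ["the"]
    while (word := next_word(sentence[-1])) is not None:
        sentence.append(word)
    print(" ".join(sentence))  # e.g. "the cat sat down"

Scaled up by many orders of magnitude, with learned weights in place of hand-written counts, this is still a procedure being executed, not a mind being consulted.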
The much more real anxieties – brought up by the more substantial critics of artificial intelligence – are that AI will super-charge misinformation and will lead to the hollowing out of the middle class by the process of automation. Do I really need to point out that both of these problems predate artificial intelligence by decades, and are political rather than technological? AI might well make it slightly easier to generate fake content, but the problem of misinformation has never been generation but dissemination. The political space is already saturated with fraud and it’s hard to see how AI could make it much worse. In the first quarter of 2019, Facebook had to remove 2.2bn fake profiles; AI had nothing to do with it. The response to the degradation of our information networks – from government and from the social media industry – has been a massive shrug, a bunch of antiquated talk about the first amendment.
Regulating AI is enormously problematic; it involves trying to fathom the unfathomable and make the inherently opaque transparent. But we already know, and have known for over a decade, about the social consequences of social media algorithms. We don’t have to fantasize or predict the effects of Instagram. The research is consistent and established: that technology is associated with higher levels of depression, anxiety and self-harm among children. Yet we do nothing. Vague talk about slowing down AI doesn’t solve anything; a concrete plan to regulate social media might.
As for the hollowing out of the middle class, inequality in the United States reached the highest level since 1774 back in 2012. AI may not be the problem. The problem may be the foundational economic order AI is entering. Again, vague talk about an AI apocalypse is a convenient way to avoid talking about the self-consumption of capitalism and the extremely hard choices that self-consumption presents.
The way you can tell that doomerism is just more hype is that its solutions are always terminally vague. The open letter called for a six-month ban. What, exactly, do they imagine will happen over those six months? The engineers won’t think about AI? The developers won’t figure out ways to use it? Doomerism likes its crises numinous, preferably unsolvable. AI fits the bill.
Recently, I used AI to write a novella: The Death of an Author. I won’t say that the experience wasn’t unsettling. It was quite weird, actually. It felt like I managed to get an alien to write, an alien that is the sum total of our language. The novella itself has, to me anyway, a hypnotic but removed power – inhuman language that makes sense. But the experience didn’t make me afraid. It awed me. Let’s reside in the awe for a moment, just a moment, before we go to the fear.
If we have to think through AI by way of the movies, can we at least do Star Trek instead of Terminator 2? Something strange has appeared in the sky – let’s be a little more Jean-Luc Picard and a little less Klingon in our response. The truth about AI is that nobody – not the engineers who have created it, not the developers converting it into products – understands fully what it is, never mind what its consequences will be. Let’s get a sense of what this alien is before we blow it out of the sky. Maybe it’s beautiful.
Stephen Marche is a Canadian essayist and novelist. He is the author of The Next Civil War and How Shakespeare Changed Everything.
" |
308 | 2,023 | "‘Design me a chair made from petals!’: The artists pushing the boundaries of AI | Art and design | The Guardian" | "https://www.theguardian.com/artanddesign/2023/may/15/design-me-a-chair-made-from-petals-the-artists-pushing-the-boundaries-of-ai" | "From restoring artefacts destroyed by Isis to training robot vacuum cleaners, architects, artists and game developers are discovering the potential – and pitfalls – of the virtual world US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing Film Books Music Art & design TV & radio Stage Classical Games A vision of blossomy luxury … Andrés Reisinger’s Hortensia chair.
Photograph: © Andrés Reisinger A vision of blossomy luxury … Andrés Reisinger’s Hortensia chair.
Photograph: © Andrés Reisinger Art and design ‘Design me a chair made from petals!’: The artists pushing the boundaries of AI From restoring artefacts destroyed by Isis to training robot vacuum cleaners, architects, artists and game developers are discovering the potential – and pitfalls – of the virtual world Mon 15 May 2023 02.00 EDT A shower of pink petals rains down in slow motion against an ethereal backdrop of minimalist white arches, bathed in the soft focus of a cosmetics advert. The camera pulls back to reveal the petals have clustered together to form a delicate puffy armchair, standing in the centre of a temple-like space, surrounded by a dreamy landscape of fluffy pink trees. It looks like a luxury zen retreat, as conceived by Glossier.
The aesthetic is eerily familiar: these are the pastel tones, tactile textures and ubiquitous arches of Instagram architecture, an amalgamation of design tropes specifically honed for likes. An ode to millennial pink, this computer-rendered scene has been finely tuned to seduce the social media algorithm, calibrated to slide into your feed like a sugary tranquilliser, promising to envelop you in its candy-floss embrace.
What makes it different from countless other such CGI visions that populate the infinite scroll is that this implausible chair now exists in reality. In front of the video, on show in the Museum of Applied Arts in Vienna (MAK), stands the Hortensia chair, a vision of blossomy luxury plucked from the screen and fabricated from thousands of laser-cut pink fabric petals – yours for about £5,000.
Suspended like a fly in amber … Morehshin Allahyari’s Gorgon, 2016.
It is the work of digital artist Andrés Reisinger, who minted the original digital chair design as an NFT after his images went viral on Instagram in 2018. He was soon approached by collectors asking where they could buy the real thing, so he decided to make it – with the help of product designer Júlia Esqué and furniture brand Moooi – first as a limited edition, and now adapted for serial production. It was the first time that an armchair had been willed into being by likes and shares, a physical product spawned from the dark matter of the algorithm.
It is one of many such projects that occupy the slippery realm between the virtual and the real in the MAK’s new exhibition, /imagine: A Journey Into the New Virtual.
It takes its title from the command that users input into the AI software Midjourney to create their own unearthly visions – a tool that has since rendered the technical skills of digital artists such as Reisinger all but useless. Midjourney could generate a pink petal chair in seconds and give you several alternatives while it's at it. For the anodyne marketing blurb, look no further than ChatGPT.
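For readers who have not used the tool: the command is typed into Midjourney's Discord chat followed by a text prompt, something like the line below, where the prompt wording is invented here for illustration.

    /imagine prompt: a delicate armchair formed from thousands of pink petals, minimalist white arches, soft pastel light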
Given the pace at which such technologies are developing, it is an ambitious subject for the comparatively slow-moving beast of a state-owned museum to tackle. But the curators, Bika Rebek and Marlies Wirth, have done an admirable job of assembling an accessible snapshot of the last decade of forays into the virtual realm, ranging from designers who have gleefully embraced the promise of the metaverse, to those sounding alarm bells about the direction we are heading in.
Opening up new perspectives on archaeological heritage … Miriam Hillawi Abraham’s game.
In the latter category, Iranian artist Morehshin Allahyari presents a series of Assyrian artefacts that were destroyed by Islamic State, which she has digitally reconstructed from photographs and 3D-printed in translucent plastic. Each contains a thumb-drive, suspended like a fly in amber, containing maps, videos and information about the destroyed artefacts, like digital time capsules. In an accompanying video lecture, Physical Tactics for Digital Colonialism, Allahyari describes the violence of IS and the more hidden violence of western big tech. By digitally appropriating and profiting from scans of historical objects and sites, without considering who that data should belong to and how it should be distributed, are the likes of Google guilty of a new form of digital colonialism? In a similar vein, a screen nearby shows snippets from a virtual reality video game developed by Ethiopian designer Miriam Hillawi Abraham.
Set in the Unesco world heritage site of Lalibela, home to 12th-century rock-hewn churches, the game allows players to experience the story from three different male perspectives, including an Indiana Jones-style white saviour archaeologist who appears to be set on looting the site's treasures. As a foil to these familiar patriarchal perspectives, however, is a fourth female character, formed from a combination of figures that Abraham discovered had been overlooked in the official history of the site. It's a clever way to use this playable, interactive medium to question accepted narratives and open up new perspectives on archaeological heritage.
The limits of AI … Matias del Campo and Sandra Manninger’s Doghouse.
Other projects explore the reach of the virtual into the home. Researcher and designer Simone C Niquille takes a pleasingly sideways look at the hidden workings of domestic smart technology in her short film, Homeschool, which she made using the 3D datasets for training consumer robots, such as Roomba vacuum cleaners, on how to navigate our homes. It is filmed, in grainy computational vision, from the perspective of a roaming robo-cleaner, and narrated by its innocent childlike voice, as it encounters new objects that it hadn't been programmed to recognise. The result is a poetic meditation on the pitfalls of robotic intelligence, making visible the hidden training data sealed inside the smart tech, and raising questions about categorisation and cultural bias built into these model digital environments. It is rendered with a beguiling, lo-fi aesthetic (made by using an artificially intelligent denoising filter, trained on thousands of images of domestic scenes), making it look as if this little vacuum cleaner might have made the film all by itself. Who knows, maybe it did?
Such a broad topic has inevitably resulted in a show that feels a bit hit and miss. There are too many mindless renders of Instagram-friendly spaces that look like Aesop concept stores or oligarchs' villas and a tedious film of an imaginary train ride through CGI landscapes (also minted as an NFT, natch). But there are plenty of other things to chew on. Spanish-Swedish duo Space Popular are showing a second, expanded iteration of their Portal Galleries (first shown at the Sir John Soane's Museum last year), exploring the future mechanics of moving between different virtual worlds. Detroit-based architect and game designer Jose Sanchez has developed a pair of simulation games, one geared towards growing an ecological city, the other exploring community collaboration and the equitable growth of neighbourhoods.
Kordae Jatafa Henry has made a stirring short film addressing the future of rare earth mines in the Democratic Republic of the Congo, imagining a time when these sites of extraction are reclaimed through dance and ritual.
A fitting conclusion to proceedings … Leah Wulfman’s My Mid Journey Trash Pile, 2022.
Elsewhere, we see the limits of AI applied to an architectural context and perhaps a generational difference in how designers are approaching these tools.
Matias del Campo and Sandra Manninger – who have been “working with new technologies and artificial intelligence since the 1990s” according to the caption – have used Midjourney to generate cross-section drawings of imaginary buildings for animals. For the exhibition, they have tried to translate this into three dimensions, by CNC-milling a polystyrene “doghouse” based on one of the AI images. Midjourney might be impressive in 2D, but the result in 3D falls flat, simply standing as a four-sided box made of the extruded sections. Still, it might come as a relief to architects that they’re not fully replaceable quite yet.
Finally, our current predicament is aptly skewered by Leah Wulfman in a project called My Mid Journey Trash Pile, which provides a fitting conclusion to proceedings. While others are using AI to conjure fantasy villas and dreamy sci-fi cities, Wulfman is holding up a mirror to the great AI experiment – and reflecting a heap of trash. Their project features hundreds of images of tattered buildings made of plastic bags, recycled bottles, refuse sacks and piles of old junk, the wonky, battered forms suggesting things such as water towers, mills or grain silos – words that Wulfman uses in the AI prompts. For this exhibition, they commissioned a series of oil paintings of their images from a Chinese painting factory, adding an extra layer of manual interpretation to the automated visions. The result is a smeary feedback loop of human and digital supply chains, left intentionally unclear whose intelligence, and whose glitches, we are looking at. It is an unnerving apparition of a possible post-digital world, a place hastily cobbled together from the landfill of 21st-century detritus – a shanty world where we can dream of lounging on petal armchairs in sleek cliff-top villas, rendered in soothing pastel shades.
" |
309 | 2,023 | "Risk of extinction by AI should be global priority, say experts | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/may/30/risk-of-extinction-by-ai-should-be-global-priority-say-tech-experts" | "Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,’ the experts said.
Photograph: S Decoret/Shutterstock ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,’ the experts said.
Photograph: S Decoret/Shutterstock Artificial intelligence (AI) Risk of extinction by AI should be global priority, say experts Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war Tue 30 May 2023 13.10 EDT A group of leading technology experts from across the world have warned that artificial intelligence technology should be considered a societal risk and prioritised in the same class as pandemics and nuclear wars.
The statement, signed by hundreds of executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology posed to humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said. Signatories included the chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.
Global leaders and industry experts – such as the leaders of OpenAI – have made calls for regulation of the technology owing to existential fears it could significantly affect job markets, harm the health of millions and weaponise disinformation, discrimination and impersonation.
This month the man often touted as the godfather of AI – Geoffrey Hinton, also a signatory – quit Google, citing the technology's "existential risk".
The risk was echoed and acknowledged by No 10 last week for the first time – a swift change of tack within government that came two months after it published an AI white paper that industry figures have warned is already out of date.
While the letter published on Tuesday is not the first, it is potentially the most impactful given its wider range of signatories and its core existential concern, according to Michael Osborne, a professor in machine learning at the University of Oxford and co-founder of Mind Foundry.
"It really is remarkable that so many people signed up to this letter," he said. "That does show that there is a growing realisation among those of us working in AI that existential risks are a real concern." AI's potential to exacerbate existing existential risks such as engineered pandemics and military arms races is a concern that led Osborne to sign the public letter, along with AI's novel existential threats.
Calls to curb threats follow the success of ChatGPT, which launched in November. The language model has been widely adopted by millions of people and rapidly advanced beyond predictions by those best informed in the industry.
Osborne said: "Because we don't understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we've designed that might play some devastating role in our survival as a species."
" |
310 | 2,023 | "No 10 acknowledges ‘existential’ risk of AI for first time | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/may/25/no-10-acknowledges-existential-risk-ai-first-time-rishi-sunak" | "Rishi Sunak meets heads of firms including DeepMind and OpenAI to discuss safety and regulation US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness Rishi Sunak met heads of the world’s leading AI research groups including OpenAI, the developer of ChatGPT.
Photograph: Michael Dwyer/AP Rishi Sunak met heads of the world’s leading AI research groups including OpenAI, the developer of ChatGPT.
Photograph: Michael Dwyer/AP Artificial intelligence (AI) No 10 acknowledges ‘existential’ risk of AI for first time Rishi Sunak meets heads of firms including DeepMind and OpenAI to discuss safety and regulation and Thu 25 May 2023 09.27 EDT The “existential” risk of artificial intelligence has been acknowledged by No 10 for the first time, after the prime minister met the heads of the world’s leading AI research groups to discuss safety and regulation.
Rishi Sunak and Chloe Smith, the secretary of state for science, innovation and technology, met the chief executives of Google DeepMind, OpenAI and Anthropic AI on Wednesday evening and discussed how best to moderate the development of the technology to limit the risks of catastrophe.
“They discussed safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation,” the participants said in a joint statement.
“The lab leaders agreed to work with the UK government to ensure our approach responds to the speed of innovations in this technology both in the UK and around the globe.
“The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats … The PM set out how the approach to AI regulation will need to keep pace with the fast-moving advances in this technology.” It is the first time Sunak has acknowledged the potential “existential” threat of developing a “superintelligent” AI without appropriate safeguards, a risk that contrasts with the UK government’s generally positive approach to AI development.
Sunak will meet Sundar Pichai, the Google chief executive, on Friday as he continues to hone the government’s approach to regulating the industry. Pichai wrote in the Financial Times this week: “I still believe AI is too important not to regulate, and too important not to regulate well.” OpenAI’s chief executive, Sam Altman, published a call this week for world leaders to establish an international body similar to the International Atomic Energy Agency, the global watchdog for nuclear safety and safeguards, in order to limit the speed at which the most powerful AI systems are developed.
Altman, who has been touring Europe meeting users and developers of the ChatGPT platform as well as policymakers, told an event in London that, while he did not want the short-term rules to be too restrictive, “if someone does crack the code and build a superintelligence … I’d like to make sure that we treat this at least as seriously as we treat, say, nuclear material”.
The UK’s approach to AI regulation has come under fire from some quarters as too light-touch. At a Guardian Live event earlier this week, Stuart Russell, a professor of computer science at the University of California, Berkeley, criticised the UK for relying on a mishmash of existing regulators rather than working out how best to regulate the field so that everything from labour market effects to existential risk was minimised.
" |
311 | 2,023 | "OpenAI leaders call for regulation to prevent AI destroying humanity | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/may/24/openai-leaders-call-regulation-prevent-ai-destroying-humanity" | "Team behind ChatGPT say equivalent of atomic watchdog is needed to guard against risks of ‘superintelligent’ AIs US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness The OpenAI leaders say superintelligence will be more powerful than other technologies humanity has had to contend with in the past.
Photograph: Andrey Armyagov/Alamy The OpenAI leaders say superintelligence will be more powerful than other technologies humanity has had to contend with in the past.
Photograph: Andrey Armyagov/Alamy Artificial intelligence (AI) OpenAI leaders call for regulation to prevent AI destroying humanity Team behind ChatGPT say equivalent of atomic watchdog is needed to guard against risks of ‘superintelligent’ AIs UK technology editor Wed 24 May 2023 01.00 EDT The leaders of the ChatGPT developer OpenAI have called for the regulation of “superintelligent” AIs, arguing that an equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.
In a short note published to the company’s website, co-founders Greg Brockman and Ilya Sutskever and the chief executive, Sam Altman, call for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose.
“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” they write. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”

In the shorter term, the trio call for “some degree of coordination” among companies working on the cutting edge of AI research, in order to ensure that the development of ever more powerful models integrates smoothly with society while prioritising safety. That coordination could come through a government-led project, for instance, or through a collective agreement to limit growth in AI capability.
Researchers have been warning of the potential risks of superintelligence for decades, but as AI development has picked up pace those risks have become more concrete. The US-based Center for AI Safety (CAIS), which works to “reduce societal-scale risks from artificial intelligence”, describes eight categories of “catastrophic” and “existential” risk that AI development could pose.
While some worry about a powerful AI completely destroying humanity, accidentally or on purpose, CAIS describes other more pernicious harms. A world where AI systems are voluntarily handed ever more labour could lead to humanity “losing the ability to self-govern and becoming completely dependent on machines”, described as “enfeeblement”; and a small group of people controlling powerful systems could “make AI a centralising force”, leading to “value lock-in”, an eternal caste system between ruled and rulers.
OpenAI’s leaders say those risks mean “people around the world should democratically decide on the bounds and defaults for AI systems”, but admit that “we don’t yet know how to design such a mechanism”. However, they say continued development of powerful systems is worth the risk.
“We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity),” they write. They warn it could also be dangerous to pause development. “Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
" |
312 | 2,023 | "‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity | Artificial intelligence (AI) | The Guardian" | "https://www.theguardian.com/technology/2023/may/05/geoffrey-hinton-godfather-of-ai-fears-for-humanity" | "Geoffrey Hinton recently quit Google warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it? US edition US edition UK edition Australia edition International edition Europe edition The Guardian - Back to home The Guardian News Opinion Sport Culture Lifestyle Show More Show More document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('News-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('News-checkbox-input').click(); } }) }) News View all News US news World news Environment US politics Ukraine Soccer Business Tech Science Newsletters Wellness document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Opinion-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Opinion-checkbox-input').click(); } }) }) Opinion View all Opinion The Guardian view Columnists Letters Opinion videos Cartoons document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Sport-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Sport-checkbox-input').click(); } }) }) Sport View all Sport Soccer NFL Tennis MLB MLS NBA NHL F1 Golf document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Culture-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Culture-checkbox-input').click(); } }) }) Culture View all Culture Film Books Music Art & design TV & radio Stage Classical Games document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('Lifestyle-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. 
columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('Lifestyle-checkbox-input').click(); } }) }) Lifestyle View all Lifestyle Wellness Fashion Food Recipes Love & sex Home & garden Health & fitness Family Travel Money Search input google-search Search Support us Print subscriptions document.addEventListener('DOMContentLoaded', function(){ var columnInput = document.getElementById('US-edition-button'); if (!columnInput) return; // Sticky nav replaces the nav so element no longer exists for users in test. columnInput.addEventListener('keydown', function(e){ // keyCode: 13 => Enter key | keyCode: 32 => Space key if (e.keyCode === 13 || e.keyCode === 32) { e.preventDefault() document.getElementById('US-edition-checkbox-input').click(); } }) }) US edition UK edition Australia edition International edition Europe edition Search jobs Digital Archive Guardian Puzzles app Guardian Licensing The Guardian app Video Podcasts Pictures Inside the Guardian Guardian Weekly Crosswords Wordiply Corrections Facebook Twitter Search jobs Digital Archive Guardian Puzzles app Guardian Licensing US World Environment US Politics Ukraine Soccer Business Tech Science Newsletters Wellness Hinton: ‘Overreacting is a lot better than under-reacting.’ Photograph: Sarah Lee/The Guardian Hinton: ‘Overreacting is a lot better than under-reacting.’ Photograph: Sarah Lee/The Guardian Artificial intelligence (AI) ‘We’ve discovered the secret of immortality. The bad news is it’s not for us’: why the godfather of AI fears for humanity Geoffrey Hinton recently quit Google warning of the dangers of artificial intelligence. Is AI really going to destroy us? And how long do we have to prevent it? Fri 5 May 2023 11.01 EDT T he first thing Geoffrey Hinton says when we start talking, and the last thing he repeats before I turn off my recorder, is that he left Google, his employer of the past decade , on good terms. “I have no objection to what Google has done or is doing, but obviously the media would love to spin me as ‘a disgruntled Google employee’. It’s not like that.” It’s an important clarification to make, because it’s easy to conclude the opposite. After all, when most people calmly describe their former employer as being one of a small group of companies charting a course that is alarmingly likely to wipe out humanity itself, they do so with a sense of opprobrium. But to listen to Hinton, we’re about to sleepwalk towards an existential threat to civilisation without anyone involved acting maliciously at all.
Known as one of the three “godfathers of AI”, in 2018 Hinton won the ACM Turing award – the Nobel prize of computing – for his work on “deep learning”. A cognitive psychologist and computer scientist by training, he wasn’t motivated by a desire to radically improve technology: instead, it was to understand more about ourselves.
“For the last 50 years, I’ve been trying to make computer models that can learn stuff a bit like the way the brain learns it, in order to understand better how the brain is learning things,” he tells me when we meet in his sister’s house in north London, where he is staying (he usually resides in Canada). Looming slightly over me – he prefers to talk standing up, he says – the 75-year-old former professor adopts a tone uncannily reminiscent of a university tutorial as he explains his research history, and how it has inescapably led him to the conclusion that we may be doomed.
In trying to model how the human brain works, Hinton found himself one of the leaders in the field of neural networks, an approach to building computer systems that can learn from data and experience. Until recently, neural nets were a curiosity, requiring vast computer power to perform simple tasks worse than other approaches. But in the last decade, as the availability of processing power and vast datasets has exploded, the approach Hinton pioneered has ended up at the centre of a technological revolution.
“In trying to think about how the brain could implement the algorithm behind all these models, I decided that maybe it can’t – and maybe these big models are actually much better than the brain,” he says.
A “biological intelligence” such as ours, he says, has advantages. It runs at low power, “just 30 watts, even when you’re thinking”, and “every brain is a bit different”. That means we learn by mimicking others. But that approach is “very inefficient” in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it’s trivial to share information between multiple copies. “You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us.” Once he accepted that we were building intelligences with the potential to outthink humanity, the more alarming conclusions followed. “I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. And I don’t know any examples of more intelligent things being controlled by less intelligent things – at least, not since Biden got elected.
“You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog. And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.”

He now thinks the crunch time will come in the next five to 20 years, he says. “But I wouldn’t rule out a year or two. And I still wouldn’t rule out 100 years – it’s just that my confidence that this wasn’t coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better.”

There’s still hope, of sorts, that AI’s potential could prove to be overstated. “I’ve got huge uncertainty at present. It is possible that large language models,” the technology that underpins systems such as ChatGPT, “having consumed all the documents on the web, won’t be able to go much further unless they can get access to all our private data as well. I don’t want to rule things like that out – I think people who are confident in this situation are crazy.” Nonetheless, he says, the right way to think about the odds of disaster is closer to a simple coin toss than we might like.
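Hinton’s earlier point about digital minds pooling what they learn can be made concrete with a minimal sketch. The short Python snippet below is an illustration only, not anything Hinton or Google has described: the names tiny_net and train_step are hypothetical stand-ins for a real model and a real training loop. It shows why sharing is “trivial” for a digital intelligence: a trained model’s knowledge lives entirely in its parameters, so copying the parameters copies everything it has learned, with no re-training.

    # Minimal sketch (assumed names, toy "model"): digital knowledge transfers by copying.
    import copy

    def tiny_net():
        # A "model" reduced to its essentials: a dictionary of learned parameters.
        return {"weights": [0.0, 0.0], "bias": 0.0}

    def train_step(model, lr=0.1):
        # Stand-in for one step of learning: nudge every parameter slightly.
        model["weights"] = [w + lr for w in model["weights"]]
        model["bias"] += lr

    original = tiny_net()
    for _ in range(100):                 # one copy does all the slow, costly learning...
        train_step(original)

    clone = copy.deepcopy(original)      # ...and a clone acquires it in one cheap copy
    assert clone == original             # identical parameters, hence identical "knowledge"

A biological learner has no equivalent of that copy step: as Hinton notes, it can transfer what it knows only slowly and lossily, by teaching and mimicry.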
This development, he argues, is an unavoidable consequence of technology under capitalism. “It’s not that Google’s been bad. In fact, Google is the leader in this research, the core technical breakthroughs that underlie this wave came from Google, and it decided not to release them directly to the public. Google was worried about all the things we worry about, it has a good reputation and doesn’t want to mess it up. And I think that was a fair, responsible decision. But the problem is, in a capitalist system, if your competitor then does do that, there’s nothing you can do but do the same.”

He decided to quit his job at Google, he has said, for three reasons. One was simply his age: at 75, he’s “not as good at the technical stuff as I used to be, and it’s very annoying not being as good as you used to be. So I decided it was time to retire from doing real work.” But rather than remain in a nicely remunerated ceremonial position, he felt it was important to cut ties entirely, because, “if you’re employed by a company, there’s inevitable self-censorship. If I’m employed by Google, I need to keep thinking, ‘How is this going to impact Google’s business?’ And the other reason is that there’s actually a lot of good things I’d like to say about Google, and they’re more credible if I’m not at Google.”

Since going public about his fears, Hinton has come under fire for not following some of his colleagues in quitting earlier. In 2020, Timnit Gebru, the technical co-lead of Google’s ethical AI team, was fired by the company after a dispute over a research paper spiralled into a wide-ranging clash over the company’s diversity and inclusion policies. A letter signed by more than 1,200 Google staffers opposed the firing, saying it “heralds danger for people working for ethical and just AI across Google”.
But there is a split within the AI community over which risks are more pressing. “We are in a time of great uncertainty,” Hinton says, “and it might well be that it would be best not to talk about the existential risks at all so as not to distract from these other things [such as issues of AI ethics and justice]. But then, what if because we didn’t talk about it, it happens?” Simply focusing on the short-term use of AI, to solve the ethical and justice issues present in the technology today, won’t necessarily improve humanity’s chances of survival at large, he says.
Not that he knows what will. “I’m not a policy guy. I’m just someone who’s suddenly become aware that there’s a danger of something really bad happening. I want all the best brains who know about AI – not just philosophers, politicians and policy wonks but people who actually understand the details of what’s happening – to think hard about these issues. And many of them are, but I think it’s something we need to focus on.”

Since he first spoke out on Monday, he’s been turning down requests from the world’s media at a rate of one every two minutes (he agreed to meet with the Guardian, he said, because he has been a reader for the past 60 years, since he switched from the Daily Worker in the 60s). “I have three people who currently want to talk to me – Bernie Sanders, Chuck Schumer and Elon Musk. Oh, and the White House. I’m putting them all off until I have a bit more time. I thought when I retired I’d have plenty of time to myself.”

Throughout our conversation, his lightly jovial tone of voice is somewhat at odds with the message of doom and destruction he’s delivering. I ask him if he has any reason for hope. “Quite often, people seem to come out of situations that appeared hopeless, and be OK. Like, nuclear weapons: the cold war with these powerful weapons seemed like a very bad situation. Another example would be the ‘Year 2000’ problem. It was nothing like this existential risk, but the fact that people saw it ahead of time and made a big fuss about it meant that people overreacted, which was a lot better than under-reacting.
“The reason it was never a problem is because people actually sorted it out before it happened.”
" |