Left of Beep: The United States Needs an Algorithmic Warfare Group
War on the Rocks, 9 December
“We look like assholes with that roach up on there,” the Army sergeant said, revealing his bloodshot eyes as he tilted his augmented-reality rig back on his helmet.
“Chernobyl Ant,” mused the scout platoon’s lieutenant, running a gloved hand over the bunches of red and black fabric, bound in silvery reflective line. Fuzzy-looking stalks jutted forward, lined up with the turret’s missile mounts. “Used to fish dry flies like these in Yellowstone, run ‘em right along the bank. Drove the trout nuts.”
“It’s not supposed to look like a roach or an ant. It’s a ladybug,” said Willie. He moved with care as he descended from the vantablack, rattle-can-painted Stryker, using pockmarks in the wheeled vehicle’s armor as toe holds. The lieutenant climbed down after the Algorithmic Warfare Group advisor, wondering how this 28-year-old MIT data science Ph.D. was really holding up. He still had his rock climber’s shoulders, but he now moved like a much older man — perpetually stooped by the weight of the laptop backpack he never took off. They’d all had to run, far more than they ever expected, as the platoon dodged Russian artillery barrages once a day, never staying put more than a few hours at most.
“Well, I’ll take a picture for my daughter then,” said the lieutenant.
“Whoa, dude. Careful there. Don’t kick that shroom,” said Willie, watching the officer climb down to the wet mud. Like the giant insect decoy on top of the Stryker’s turret, the purple mushroom-like foam pieces glued to its reactive armor were designed to fool artillery spotter drones.
These organic shapes confounded machine vision and made the Stryker look like a civilian panel van. They’d been on the Strykers for a couple weeks. Then, 24 hours ago, the Algorithmic Warfare Group’s (AWG’s) adversarial systems back at Fort Meade had devised the ladybug countermeasures, and the designs were rushed out to the dozens of AWG advisors in the European theater who printed, crafted, and constructed them as fast as they could. The advisors worked with their hands a lot, but many of them were also as adept at finding relevant local data, messy as it might be, that could help identify targets and feed back into predictive models at the Fort. Their predecessors had moved around Iraq and Afghanistan schooling U.S. forces on how to combat advanced improvised explosive devices and grenade-dropping quadcopters. This new generation of advisors moved between Army and Marine Corps companies and platoons throughout the Baltics helping American and allied forces dodge Russian drones and thwart disinformation campaigns.
“So they have three-foot ladybugs in Lithuania?” a private asked. “At least that’s what the Russians think, you’re telling me? I don’t know, man.”
“It’ll work,” said Willie. “Trust me.”
“I trust you plenty, just not the AIs behind it. If our company gets wiped out when the Russians waste an entire grid square, I guess we’ll know,” said the lieutenant. He’d heard stories about the old AWG in Afghanistan, grizzled Delta-type operators trooping around with the grunts and doling out Obi-Wan Kenobi warrior wisdom while they snacked on handfuls of Motrin. This was something altogether different.
“That’s data too,” said Willie. “It’s all just data.”
In and around the Defense Department, there is an awareness that the prevailing sides in future conflicts will be those that persistently take a bottom-up approach to designing, experimenting with, and fielding capabilities centered on AI systems. Yet there is an underappreciated risk that this building wave of knowledge and innovation won’t filter down in time to shape the tactics of frontline U.S. units, where it will be badly needed.
It’s time to establish a joint Algorithmic Warfare Group that turns the first AWG’s “left of boom” mantra into “left of beep.” An “AWG 2.0” would focus on the inevitable autonomy and robotics moves and countermoves of 2020s conflict. Its inspiration is the up-close advisory work of the U.S. Army’s Asymmetric Warfare Group, which recently cased its colors. Until it shut down in 2020, the Asymmetric Warfare Group focused on helping the Army adapt during wartime, from countering novel drone and grenade threats in Iraq to pathfinding tactics for underground operations. Its members, often former special operations forces, were both observers and advisors who teamed with handpicked enlisted soldiers and officers from units working in harm’s way. An AWG 2.0 advisory unit would develop countermeasures to AI-driven combat and information systems like machine vision and autonomous drones. It would also develop new rules for acquiring, sharing, and processing the myriad data necessary to prevail in this new era of conflict.
Starting this decade, AI — essentially a family of software systems with increasingly self-directed and thought-like capabilities — will bring a generational change to warfare and introduce new risks for the U.S. military. The National Security Commission on Artificial Intelligence concluded in its final report this summer that “America is not prepared to defend or compete in the AI era” and that the Defense Department and the intelligence community should become “AI ready” by 2025.
AI will help U.S. commanders speed up machine and human decisions in the heat of battle, allow ever more connected autonomous robotic systems to take center stage in kinetic operations, and introduce unequaled operational efficiencies that will save money and lives in everything from personnel management to aircraft maintenance. How quickly and effectively each side integrates these new software systems will heavily influence, if not determine, who prevails in conflict. To get a sense of how transformative AI will be to the U.S. military, consider the central role different AI capabilities play at some of the world’s most powerful — and valuable — American companies in sectors like social media (Facebook and Twitter), data search and management (Google and Amazon), personal technology (Apple), and transportation (Uber and Tesla).
Move Fast and Build Things
The AWG 2.0 could eventually support all of the services but should begin by helping ground units develop adversarial deception, targeting, and effects tactics to degrade or deny an opponent’s AI systems. As the vignette above shows, AWG advisors could help dispersed Army units spoof machine-vision software used by an adversary’s low-flying artillery-spotter drones. AI-powered systems on those drones will be common as commanders drown in more video feeds and visual data than human analysts can keep up with. Tricking those machine-vision systems will help U.S. forces hide in plain sight, or at least buy them time to leave an area or prepare for contact. This might be done by literally crafting glue-on three-dimensional objects to confound machine vision, or by using simpler visual tricks that result in pixel spoofing that can “turn a car into a dog.”
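The pixel-spoofing idea can be sketched with a toy model. This is purely illustrative, not any fielded system: a simple linear classifier stands in for a drone’s machine-vision detector, and the well-known fast gradient sign method (FGSM) from the adversarial-examples literature generates a small, bounded perturbation that flips the detector’s output. All names and numbers here are assumptions for the sketch.

```python
import numpy as np

# Toy stand-in for a spotter drone's vision model: a linear classifier
# whose score w @ x decides "vehicle" vs. "background". (Illustrative
# assumption only; real detectors are deep networks, but the same
# gradient-sign attack idea applies to them.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # hypothetical learned weights
x = 2.0 * w / np.linalg.norm(w)    # an input the model confidently flags

def classify(x):
    return "vehicle" if w @ x > 0 else "background"

def fgsm_perturb(x, eps):
    # For a linear model, the gradient of the score with respect to the
    # input is just w, so stepping eps against its sign is the worst-case
    # small (L-infinity bounded) change to every "pixel".
    return x - eps * np.sign(w)

print(classify(x))                     # "vehicle"
print(classify(fgsm_perturb(x, 0.5)))  # flips to "background"
```

The glue-on three-dimensional decoys in the vignette are the physical analogue of this digital trick: instead of perturbing pixels, they perturb the scene itself so the captured pixels mislead the model.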
There is a wealth of knowledge in the U.S. commercial sector and academic communities about what doesn’t work in developing AI systems. All those lessons learned, including the big problems plaguing commercial AI — such as why some driverless cars don’t see emergency vehicles — can be useful when those problems are reframed as opportunities to confound an enemy’s sensors. Given the perils of a “move fast and break things” mindset that’s produced ills such as rampant disinformation on social media and algorithmic bias in law enforcement data, AWG 2.0 leaders should go to great lengths to truly understand which lessons learned are the most relevant to their mission.
There is ample opportunity to draft, bureaucratically speaking, from this summer’s initiative unveiled by Deputy Defense Secretary Kathleen Hicks to send teams of data experts to all of the U.S. combatant commands to help identify areas that can be improved as the department lays a foundation for an AI-integrated force. This could start with Army and Marine Corps infantry, logistics, artillery, as well as special operations elements such as Army Special Forces and Marine Raiders. U.S. Special Operations Command’s tactical data teams point to the viability of this concept. Over time, wherever U.S. forces (including not just the Navy but the Coast Guard) are deployed in conflict zones, there should be AWG 2.0 advisors working with their carefully selected partners.
An AI Ph.D. Trained in CQB
In the near term, a first step would be to attach individual advisors, or small teams of them, to Special Forces A-Teams and Raider teams on their deployments to the Baltics, the Arctic, and China’s Pacific periphery. At the outset, they would be observing and learning as much as advising, in turn disseminating that data and knowledge for the next phase. AWG advisors would then immediately begin helping conventional units experiment with what the deployed advisors had learned. Further expansion — to the Navy’s surface fleet, for example — depends on myriad factors. But, given the many types of AI systems that can profoundly affect American forces, such ambitions are not out of line with the threat — or the opportunity.
The group could start with about two dozen advisors and researchers, who would initially work on a short-term rotational basis with U.S. forces. Talent is everything in the hyper-competitive field of civilian AI research, and the same will be true for AWG 2.0. Though its focus would fundamentally be on an emerging technology that is admittedly hard to define even for experts, AWG 2.0’s success will come from its people more than from code or data sets. The ideal advisor would be a lot like Willie in the narrative above: an AI-focused Ph.D. trained in close-quarters battle, to riff on William Donovan’s World War II ideal of the Office of Strategic Services recruit as a Ph.D. who could win a bar fight. Unlike AWG 1.0’s heavy draw from the special operations community, AWG 2.0 advisors would ideally have academic, government, or private-sector experience in computer and data sciences, along with related military training or education. The AWG program would start them out with rudimentary tactics and battlefield proficiency to facilitate integration into operational units, with the idea of building “soldier” skills over time.
Traditional staffing and work arrangements likely won’t work, even more so in the pandemic era. AWG 2.0 needs to embrace a distributed or networked organizational model that dispenses with hierarchy and allows the flexibility to attract and retain the unique people who can serve as advisors. While not cheap, the up-front investment in flexibility will pay off in future effectiveness because the right people will be doing the work. They need not have military experience, to be sure, but it would make sense to pair those who don’t with someone who does when they are sent forward to work with U.S. units.
Where to nest a potentially disruptive organization like an AWG 2.0 within the Defense Department may be a thorny issue. One sensible path would be to create a hub at Fort Meade, as the Asymmetric Warfare Group once did. AWG 2.0 would bureaucratically exist under the Joint Artificial Intelligence Center and its budget lines to keep it as close as possible to the government and industry AI expertise and data that will be central to its success.
While there is a risk that linking AWG 2.0 with the Joint Artificial Intelligence Center saddles it with bureaucratic baggage, the reality is that a protective parent is essential. A crucial job for AWG’s parent is to shield its sure-to-be disruptive people from pushback and to ensure AWG advisors have a direct line to their supported unit’s senior commanders — even all the way up to the service chiefs. The importance of this direct line of communication should not be underestimated: When AWG 2.0 members and their uniformed partners find instances in the field where U.S. forces are ill-prepared for the algorithmic warfare era, it is sure to ruffle feathers, if not provoke outright hostility.
Wartime Data Access
After adversarial AI tactics, data access would be a second focus area for the Algorithmic Warfare Group. Getting a set of rules around wartime — not peacetime — data sharing and access is too important to leave until a crisis is underway. There are substantial legal, commercial, and normative hurdles to negotiating relationships between private and public sector entities and the militarily relevant information they have. By developing operational and tactical perspectives on which data is most relevant, AWG can be an important advocate — and testbed — for these new data-access paradigms.
Moreover, within these knotty questions is an opportunity for the Defense Department to lead on ethical data use. By taking an applied, but considered, approach leveraging recent policy work on “responsible” AI, American military data standards could set a high bar that the private sector might even seek to follow. Allies are integral to this, and identifying allied nations, and companies, who can quickly become partners in Europe and the Asia-Pacific should be a priority.
NATO allies in Northern Europe, such as Norway, or friendly nations along China’s periphery, like Australia, are also natural partners for AWG 2.0 when it comes to sharing relevant and often sensitive data. They are wrestling with the same questions about how to transform their forces for the AI era while confronting gaps in defeating threats like autonomous undersea systems; swarming aerial drones; and millions of highly targeted, rage-inducing social media posts. The operational relevance is real: In the narrative, a Lithuanian army intelligence unit that captured and exploited a Russian drone or control unit to identify its specific sensors or algorithms might have been the source of Willie’s information about which of his cartoon-like machine-vision countermeasures would be effective.
Reviving a scrappy concept that grew out of a war focused on counter-insurgency and counter-terrorism for AI-powered great-power war may seem as ridiculous as gluing a giant ladybug to a Stryker’s turret. But keep in mind Willie’s wisdom that “it’s all just data,” and it starts to make a lot more sense.
August Cole is a non-resident fellow at the Krulak Center For Innovation and Future Warfare at Marine Corps University and is a non-resident senior fellow at the Atlantic Council’s Scowcroft Center on Strategy and Security. A former Wall Street Journal reporter, he has written commissioned fiction in support of the Defense Science Board’s AI Ethics Task Force and the National Security Commission on Artificial Intelligence and has worked as a futures consultant for an AI company. He also leads the AI and Strategy team for the Peace Research Institute of Oslo’s “Warring with Machines” project. With Peter W. Singer, he is co-author of Ghost Fleet: A Novel of the Next World War (2015) and Burn-In: A Novel of the Real Robotic Revolution (2020). Cole and Singer are principals at Useful Fiction LLC, which works with U.S. and allied defense and security organizations on narratives, foresight, and training.
Photo by Spc. Savannah Miller