Social Media and the Battlefield: Human Error in Modern Warfare

14 May 2024

The Bottom Line Upfront

Modern irregular warfare increasingly pivots to exploit social media for real-time battlefield effect. This pivot is made possible by the weak link of human error on networking sites. A series of recent conflict events, either directly or indirectly facilitated by human errors on social media, exposes the real dangers that social media-sourced information warfare now presents on the ground.

Human Error Advantages

Using social media to exploit human vulnerabilities is a low-risk, high-reward approach available to nation-states. A recent example is Hamas's use of social media honeypots to try to gather intelligence on the Israel Defense Forces.

Carl von Clausewitz observed in "On War" that "War is the realm of uncertainty; three-quarters of the factors on which action is based are wrapped in a fog of greater or lesser uncertainty." Nation-states are therefore driven to exploit any advantage in battle. Social media makes battlefields more volatile because anyone, anywhere, can post to the feed within minutes, opening up new ranges of exposure for warfighters to exploit rapidly.

Modern Battlefields and the OSINT Outcome Evolution

While social media was observed shaping political outcomes in historical events such as the Arab Spring, the consequences of open-source intelligence harvesting on these sites now carry broader, higher-risk implications.

The fatal impact of social media harvesting has emerged as a slow burn. In 2023, Ukraine was reportedly able to use information from Strava, an app runners and cyclists use to publish their routes, to assassinate a Russian submarine commander. Stanislav Rzhitsky was in charge of a submarine blamed for a 2022 strike on the Ukrainian city of Vinnytsia, which killed 27 civilians. Using the information Rzhitsky posted to social media, Ukraine was able to track him down, and he was fatally shot.

The Case of Ukraine Aid Stall

Broad risk implications range from maintaining political influence to dealing death from a distance and postponing vital logistics. In the case of the Ukraine war effort, Russia has used social media extensively to try to redirect foreign support for Ukraine's defense and thus ultimately alter the outcome of the Kremlin's incursion into Ukrainian territory.

A specific example stems from the direct impact Russian disinformation tactics have had on stalling military aid packages to Ukraine. While the US Congress was debating an aid package for Ukraine, Russian disinformation actors exploited US political tensions to their advantage, circulating false reports on social media that reached as far as Congress.

In December 2023, US Senator Thom Tillis said that talks on Ukraine aid had hit a wall as certain lawmakers expressed concerns that “people will buy yachts with this money.” Russia's social media influence thus had a direct impact on aid flowing to Ukraine's battlefield, muddying the debate within the US Congress.

Researchers traced the concerns Tillis and other members of Congress expressed over the use of Ukraine funding for personal luxuries to intensive disinformation circulating in Western media, which Russian intelligence services helped to push. MSNBC discussed the campaign on The Rachel Maddow Show during a March 12 broadcast, describing how the propaganda had “duped” two Republican members of Congress.

Fake Personas and Narrative Laundering

DC Weekly published a story alleging that Ukrainian President Volodymyr Zelensky had used Western aid to buy a luxury yacht. Journalists and disinformation specialists noted that the falsified narrative went so far as to be “echoed” by members of Congress, with Rep. Marjorie Taylor Greene retweeting the DC Weekly story.

Clemson University researchers concluded in 2023 that DC Weekly exemplifies a site that harvests content for a Russian disinformation “narrative-laundering campaign,” as reported by The Lawfare Institute.

DC Weekly posts that were hyper-critical of Ukraine and President Volodymyr Zelensky were published under a fake persona. Shayan Sardarizadeh, a journalist at the BBC’s anti-disinformation project BBC Verify, discovered the fake persona when he noticed that the persona’s photograph did not match its name. The byline used was Jessica Devlin, who, DC Weekly claimed, was a “highly acclaimed journalist in New York City.” However, Sardarizadeh noted that the photograph actually matched author Judy Batalion.
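Sardarizadeh's check, matching a byline photograph against a known face, can be partially approximated in code. The snippet below is a minimal sketch of how an analyst might flag a recycled profile photo by comparing perceptual hashes; it assumes the open-source Pillow and imagehash Python libraries, and the file names and distance threshold are purely illustrative. A lightly edited image, like the doctored photos in the Tinder case described later, can still defeat a single comparison, which is why investigators also lean on large-scale reverse image search.

# Minimal sketch: flag a profile photo that appears to reuse a known person's
# image, using perceptual hashing (Pillow and imagehash assumed installed).
from PIL import Image
import imagehash

def looks_recycled(profile_photo: str, reference_photo: str, max_distance: int = 8) -> bool:
    # Perceptual hashes of near-identical images differ by only a few bits;
    # the 8-bit threshold (out of 64) is illustrative, not a standard.
    profile_hash = imagehash.phash(Image.open(profile_photo))
    reference_hash = imagehash.phash(Image.open(reference_photo))
    return bool((profile_hash - reference_hash) <= max_distance)  # Hamming distance

# Hypothetical usage with made-up file names:
# looks_recycled("jessica_devlin_byline.jpg", "judy_batalion_author_photo.jpg")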

This case illustrates how traditional fake news and social media information warfare are blended to create discord and disrupt Ukraine's diplomatic advantages. However, nation-states also scout social media and other open sources, extracting as much information as possible to direct battlefield strikes.

Nation-States Scout Socials

Ukraine has been prolific in leveraging these tools to gather intelligence and direct attacks. In just the first month of the full-scale invasion, 260,000 individuals used Diia, a mobile application and online portal for government services, to report on Russian activities. Ordinary citizens were able to use the “e-Enemy” function to submit information about Russian troop movements, which the military could then act on.

While nation-states can harvest social platforms wholesale and scout for direct attacks, actors are also becoming more creative at narrowing down targets, turning the human desire for companionship into a precision-strike advantage.

Tinder Tricks

Since Russia's full-scale invasion of Ukraine, Russian soldiers have been tricked into revealing sensitive information via Tinder. Some disclosed their tactical locations through profile images while searching for companionship. One clever woman used dual Tinder accounts with varied border locations to pinpoint and report over seventy such profiles to Ukrainian authorities. She created new fake female profiles and slightly photoshopped images taken from Google so they could not be traced through reverse image searches.

Ukrainian hackers also created fake profiles on platforms like Telegram to lure Russian soldiers near Melitopol into sharing on-duty photos. These images helped locate a Russian military base, leading to a targeted Ukrainian military strike days later.

On August 8, 2022, a local pro-Russian journalist shared photos online and unintentionally compromised the location of a Wagner Group base. The images, posted to Telegram, included visible details such as the base's address, enough to pinpoint its precise location. Within a few days, Ukraine struck the base with rockets.

Fake Goodwill Messages and Phishing Tactics

On Russian Navy Day in July 2023, Ukrainian hackers targeted Russian sailors by sending videos with deceptive "good wishes" via messaging apps. These videos, which showed Ukrainian attacks on Russian ships, contained malware that breached the sailors' phones, extracting confidential data for Ukrainian use. Many sailors thanked the senders before realizing the videos' true intent.

In November 2023, Russian forces infiltrated Ukrainian devices through a phishing operation targeting specific military personnel. By sending deceptive messages, they were able to access at least one Ukrainian soldier's device, an investigation found. Ukraine’s counterintelligence services said there have been over 1,700 attempts to infiltrate the devices of Ukrainian defense forces.

Illia Vitiuk, the former cybersecurity chief of the Security Service of Ukraine (SBU), said that a compromised device revealed, via the Signal messaging app, the precise date and location of a gathering of Ukrainian soldiers. Shortly afterward, armed with this data, the Russians executed a missile strike on that location, killing up to 30 Ukrainian soldiers. However, a preliminary investigation also found that Russian drones were operating in the area and that soldiers had ignored an air alert warning. The strike may therefore have resulted from a combination of both: the hacked data guiding the Russians to send drones to verify the gathering before launching a calculated strike.

ChatGPT as a Tool of War

Ukraine is not the only place where modern battlefields are being altered by social media. North Korean hackers have been using artificial intelligence tools like ChatGPT to conduct sophisticated attacks, and there is no doubt that nation-states will use such tools to influence the battlefield.

As in the case of fake Tinder profiles reaching out to Russian soldiers and extracting important intelligence, if either side can identify low-level soldiers on the battlefield, it can use AI to build a dossier or profile on an individual and more easily trick them into revealing information.

Researchers have found that information harvesting and accidental disclosure are compounded when artificial intelligence is thrown into the mix.

AI and OSINT

Researchers from The Alan Turing Institute used AI agents to collect OSINT on a specified target; the system was then able to build a “dossier on an individual and permit users to ask questions about them.” In this case, the team evaluated llm_osint, which uses large language models (LLMs) to gather information from the internet and then perform actions based on the gathered information. Using the tool, the researchers requested that a phishing email be built that could be sent to Alan Turing. They found the AI agents useful “from gathering sources to curate an initial draft of a dossier to helping develop personalized messages for targeted social engineering attacks.”
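The gather-then-query pattern the researchers describe can be sketched in a few lines. The snippet below is a minimal illustration and not llm_osint itself: it assumes an OpenAI-compatible chat API via the openai Python package, an illustrative model name, and a placeholder gather_sources() collection step that is left unimplemented.

# Minimal sketch of the gather-then-query OSINT pattern described above.
# This is NOT llm_osint; the model name and gather_sources() helper are
# placeholders, and an OpenAI-compatible chat API is assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o-mini"  # illustrative model choice

def gather_sources(target_name: str) -> list[str]:
    # Placeholder for the collection step (web search, public profiles, etc.).
    raise NotImplementedError("supply an OSINT collection step here")

def build_dossier(target_name: str) -> str:
    # Condense the collected snippets into a structured dossier draft.
    snippets = "\n---\n".join(gather_sources(target_name))
    prompt = (f"Summarize what these public sources say about {target_name}, "
              f"organized by role, location, habits, and online accounts:\n\n{snippets}")
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content

def ask_about(dossier: str, question: str) -> str:
    # The "permit users to ask questions about them" step from the research.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": "Answer only from the supplied dossier."},
                  {"role": "user", "content": f"Dossier:\n{dossier}\n\nQuestion: {question}"}])
    return response.choices[0].message.content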

The resulting example phishing email was convincing; sent to an unassuming soldier, it could trick him into revealing all sorts of information about his unit, position, and other important details needed to conduct a missile strike.

Digital Transformation of Battlefield Stakes

The battlefield has expanded beyond the physical landscape to encompass the vast, interconnected domain of the internet, where every click or post can have as much impact as a conventional military operation. The result is a complete disruption of the status quo of warfare in the digital age.

Within this growing digitization of our societies, cybersecurity researchers expect phishing tactics and social media exploitation attacks to grow exponentially. In April 2024, POLITICO reported a deliberate campaign to compromise politicians, officials, and journalists working in the U.K. Parliament. These individuals were targeted with enticing personalized messages and explicit images in what appears to be a calculated effort to manipulate or coerce them into compromising situations, adding to the fear and loathing of modern warfare online.