Technology: The Militarization of AI
The mainstreaming of Artificial Intelligence (AI) over the last few years has led to much speculation about its potential impact on various fields, ranging from education to the economy. However, if there is one field where AI applications will have a far more far-reaching impact, it is security, driven by geopolitical considerations. This became clear in the recent confrontation between the American AI company Anthropic and the Pentagon. The latter is now on a trajectory to build AI-powered cyber tools that could enable it to conduct automated reconnaissance of critical infrastructure within China and other adversary countries, such as power grids, utilities, and sensitive networks. Such infrastructural reconnaissance is expected to help the U.S. in any future military conflict.
More dangerously, the purpose of adding AI as a new way to conduct cyber espionage – a field in which the U.S. was already active – is to identify software flaws within enemy networks and systems, and to exploit those weaknesses to disable the enemy from within. It can also help identify new targets in the event of a war – a task that traditionally required enormous manpower and time. China is already active in this field through the use of agentic AI, and Chinese hackers had even targeted Anthropic a few months ago. With the U.S. now entering this domain, the Pentagon recently awarded contracts worth around $200 million to major American AI companies, including OpenAI, Anthropic, Google, and xAI. These contracts enable the companies to partner with the U.S. government on military, cyber, and security applications of AI.
However, the moral repercussions of this partnership surfaced recently when contract negotiations between the Pentagon and Anthropic broke down and the Pentagon listed Anthropic as a supply chain risk – a label usually reserved for foreign adversaries and their companies. Such a listing could endanger Anthropic’s future business prospects, as it implies that working with Anthropic poses a risk to American national security. It would also bar Anthropic from further contracts with the Pentagon. While Anthropic had initially agreed to the contract partnership with the U.S. government, it laid out two conditions under which it would not allow its AI applications to be used: the use of AI for mass domestic surveillance of U.S. citizens, and the use of AI for developing fully autonomous lethal weapon systems, that is, weapons that can select and engage military targets without human involvement. The U.S. government, however, demanded ‘unfettered’ access to Anthropic’s AI applications and the removal of any safety restrictions. These demands became a non-negotiable roadblock, resulting in Anthropic refusing to partner with the U.S. Department of Defence and being labelled a ‘supply chain risk.’ Instead of yielding to the American government, the company took the matter to the courts of appeal, where it is currently being fought. Interestingly, other American AI companies, such as OpenAI, yielded to the government’s demands without qualms. Further, despite the fallout, the American government continues to use Anthropic’s applications – they were deployed in the Iran war within 24 hours of the company being labelled a supply chain risk.
Far-reaching Implications:
The stand-off between Anthropic and the Pentagon will have many far-reaching implications, with a lot depending on how this case is legally decided. Some of the immediate implications are as follows:
First, the dispute brings to the fore the inevitability of AI deployment in military applications. Not very long ago, the debate centred on how AI would impact the military; we have quickly crossed that threshold, and AI is already being used in military applications. The question now is how many possible variations and combinations such AI applications can undertake – a measure of how fast military AI is transitioning. It is already being used in the Ukraine war, and it was widely used by Israel in decimating Hamas. In last year’s twelve-day war between the U.S.-Israel and Iran, AI was even used to select targets and take them out. In this year’s Iran war as well, Israel has relied heavily on AI for target identification.
Second, the most contentious aspect of this confrontation is the role it reveals for the private sector in regulating military uses of AI. Presently, the race towards the military application of AI is being shaped by two key countries – the U.S. and China – which offer competing visions of AI use. In China, with its autocratic system, technological freedom is subordinated to state sovereignty and national security, allowing seamless coordination between the private sector and the state. In the U.S., whose system is bound by democratic obligations, there is friction between the two domains, and the resulting partnership model between the state and the private sector does not always work smoothly.
The current conflict with Anthropic bears this out. Anthropic wanted continuous oversight over the use of its AI applications even after the Pentagon acquired them, and the latter could not permit that. Continued oversight by Anthropic, including the right to impose restrictions on the use of its technology, would mean that matters of national security were being held hostage to the decisions of an unaccountable, profit-driven private sector. No state would simply bow to a private company’s terms of service, and no number of well-intentioned arguments in favour of human rights and civil liberties could justify it. If permitted, it would produce a new kind of state structure in which private technology companies play an overwhelming role in critical state policy. Were a company’s intentions to shift from benign to more ambitious, the state itself would be held hostage.
Finally, another critical question this episode raises is why Anthropic wanted oversight and restrictions over the use of its AI applications in the first place. When Anthropic objected to the deployment of its AI in autonomous lethal weapons, the reason lay in the uncertainty surrounding AI’s evolution. Unlike military hardware – tanks and artillery – AI is constantly evolving. Once a traditional defence manufacturer like Lockheed Martin or Rheinmetall supplies weapons to the state, it cannot dictate how and where those weapons are used. With AI, that is not the case: even after deployment, the technology keeps changing and evolving.
Today, AI innovation is at a stage where even the companies that create AI applications cannot predict or control how they will evolve. This will become more of a concern as AI develops new forms of intelligence and autonomous functions. While autonomous weapons systems have been around for decades, integrating AI with them and allowing it to select and engage targets would be unprecedented, especially for a technology whose future development can neither be predicted nor controlled.
Nepal Elections: A New Political Configuration
The recent elections in Nepal saw an unprecedented victory for the Rashtriya Swatantra Party (RSP) and the election of its prime ministerial candidate, Balendra Shah. The four-year-old party, which won only 20 seats in the 2022 elections, has now swept the polls with an overwhelming victory – despite a complex, mixed electoral process under the 2015 Nepali Constitution that makes it difficult for a single party to win an outright majority. The RSP won a commanding majority of the 182 directly elected seats in the House of Representatives and about 48% of the proportional vote share. In contrast, the Nepali Congress won only 38 seats, with 19.1% of the proportional vote. K.P. Sharma Oli’s communist party, the CPN (UML), won only 25 seats, with 13.4% of the proportional vote; Oli himself was defeated in his stronghold constituency of Jhapa by Balendra Shah, by a margin of nearly 50,000 votes. Pushpa Kamal Dahal’s communist party, the NCP, won only 17 seats, with 7.5% of the vote.
The elections reveal the changing political dynamic in Nepal and will have significant regional geopolitical implications. Some of the immediate implications are:
First, the elections signify a generational change in Nepal’s leadership – an opportune change in a country where nearly 40% of the population is under 35 years of age. They have brought to the fore the 36-year-old engineer-turned-rapper-turned-Kathmandu-mayor, Balendra Shah. This is an outcome of rising public awareness, fuelled largely by the 2025 popular protests against the existing regime. Triggered by the government’s social media ban and spreading to other grievances – corruption, economic stagnation, entrenched patronage networks, nepotism, and the loss of jobs – the protests culminated in attacks on institutions and a temporary government collapse. The drastic change in leadership witnessed in the 2026 elections is, in a way, the culmination of that youth-led movement.
Second, political stability has largely eluded Nepal ever since it ushered in the multiparty system in 1990. The country went through several political upheavals, including the abolition of the Hindu monarchy. The promulgation of a new Constitution in 2015, and the subsequent stand-off with India, did not make things better. The system saw a circulation of elite rule between Oli, Prachanda, and Deuba. Those elite networks have now broken, and the country can start afresh.
Third, Balendra Shah’s record as Kathmandu’s mayor already throws light on his style of governance. He is known for a technocratic approach focused on cleanliness, beautification, waste management, and traffic control. His policies have at times been criticized as anti-poor and overly focused on economic development. This technocratic style became visible soon after he took office: the new government has already begun delivering on its first-100-days agenda, with notable steps including corruption probes reaching big political names and a comprehensive overhaul of education.
Finally, Nepal sits at a precarious geopolitical juncture. It has a deeply embedded cultural relationship with India even as it attempts to expand its strategic relationship with China. This balance, which often led to a tilt towards China and diplomatic coldness with India, is now being redefined. Under the Communists, China had an outsized influence in Nepal – so much so that it even brokered an agreement to unite the two major communist factions into a single political outfit. Under the new regime, that arrangement has broken down. Given Balendra Shah’s non-Communist, Hindu orientation, cultural and economic ties with India are expected to revive, even as economic compulsions will keep Nepal engaged with China. The influence of Christian missionary organizations is already declining, partly because of the drying up of funding from USAID, which has become defunct under the Trump administration.
From India’s perspective, the new dispensation in Nepal, and the promise of stability it brings, offers a positive opening to reset relations, as the entrenched Communists had invariably taken an anti-India posture. Given the rising awareness among Nepal’s youth, India is realizing that its existing model of patronage politics is no longer viable even in a small neighbouring country like Nepal and will have to give way to an equitable partnership.