The Risks of AI Implementation at the Department of Veterans Affairs

A Growing Concern

The Department of Veterans Affairs (VA) is facing significant scrutiny over its recent operational changes, particularly those driven by the involvement of DOGE staffers. Critics argue that these changes, including the introduction of artificial intelligence (AI) tools, could jeopardise the care and services provided to veterans.

Representative Gerald Connolly, ranking member of the House Oversight Committee, has voiced strong concerns, stating that “DOGE’s actions at the VA are putting veterans’ lives at risk.” He further emphasised that veterans may lose access to essential care due to what he describes as a lack of expertise and oversight under the leadership of former President Donald Trump and Elon Musk.

Concerns from VA Employees

Employees within the VA have expressed unease about the changes being introduced by DOGE staffers. According to one employee, these individuals appear to lack a fundamental understanding of the agency’s operations. “These people have zero clue what they are working on,” one VA employee told *WIRED*. This sentiment reflects a broader concern about the potential impact of inexperienced decision-makers on an organisation as critical as the VA.

Despite repeated requests for comment, neither the VA nor the key individuals involved (Volpert, Roussos, Fulcher, Rehling, and Koval) has responded to inquiries. This lack of transparency has only heightened apprehension among stakeholders.

The Role of Sahil Lavingia and AI Integration

Sahil Lavingia, a key figure in this transformation, is leveraging his experience to shape the VA’s future direction. Lavingia previously led Gumroad, a platform that achieved financial stability by automating processes and minimising human involvement. In an October 2024 blog post on his personal website, Lavingia detailed how Gumroad succeeded by replacing manual processes with automation, pushing costs onto customers, and operating with minimal staff.

He wrote:

“Today, humans are necessary for stellar customer service, crisis management, regulatory compliance and negotiations, property inspections, and more. But it won’t be long until AI can do all of the above.”

This philosophy seems to underpin Lavingia’s approach at the VA. According to sources familiar with his work, he advocates adopting an AI tool called OpenHands. This tool is designed to write code and streamline development processes within the agency. Lavingia has reportedly prioritised integrating OpenHands into the VA’s technological framework, describing it as a key initiative for both the Chief of Staff and the Secretary.

Potential Risks of AI Adoption

While automation and AI promise to improve efficiency, their implementation within a federal agency like the VA raises significant concerns. Two VA tech workers have highlighted several issues related to OpenHands. First, they noted that sensitive information stored in GitHub repositories, such as veterans’ Social Security numbers, banking details, and medical histories, could be at risk if AI tools are improperly used or inadequately secured.

One tech worker explained that federal systems must meet stringent security classifications before new tools can be adopted. However, they believe that OpenHands has not undergone sufficient vetting for government use. “Any programming tools or applications that you use in federal systems have to meet a bunch of security classifications,” they said. The lack of proper evaluation could expose critical systems and data to vulnerabilities.

Another concern involves data privacy. If OpenHands writes scripts that access sensitive information without adequate safeguards in place, the result could be breaches that compromise veterans’ personal data, violate privacy standards, and erode trust in an institution designed to serve those who have sacrificed for their country.

Operational Challenges and Procedural Oversight

The introduction of AI tools like OpenHands also raises questions about procedural adherence within the VA. Employees have noted that standard protocols are not being followed during this transition. One source stated: “They’re not following any of the normal procedures, and it’s putting people at risk.” They warned that system failures resulting from rushed or poorly executed implementations could disrupt veterans’ access to essential benefits.

Such disruptions would have far-reaching consequences for veterans who rely on these services for their well-being. As one employee put it: “These are people who have given pieces of themselves to their country, and they deserve more respect than that.”

Balancing Innovation with Responsibility

While technological innovation has the potential to transform public services like those offered by the VA, it must be approached responsibly. Integrating AI tools should not come at the expense of security or service quality, especially when dealing with sensitive data and vulnerable populations.

The concerns raised by VA employees highlight the need for a more cautious and transparent approach to implementing new technologies within government agencies. This includes conducting thorough security assessments, adhering to established procedures, and ensuring that all stakeholders are adequately informed and prepared for change.

Summary

The ongoing changes at the Department of Veterans Affairs underscore both the opportunities and challenges associated with adopting emerging technologies in public institutions. While automation and AI can streamline processes and reduce costs, their implementation must be carefully managed to avoid unintended consequences.

As debates continue over the role of DOGE staffers and AI tools like OpenHands within the VA, policymakers and agency leaders must prioritise accountability and security. Veterans deserve nothing less than a system that respects their sacrifices while delivering reliable and high-quality care.