The Pentagon is rapidly deploying artificial intelligence in combat operations — from drone targeting to battlefield surveillance — while gutting the safeguards meant to ensure these systems are safe, effective, and lawful. The consequences are not hypothetical. AI targeting errors have been linked to civilian deaths in active conflict zones, and are widely suspected to have contributed to a strike on a school in Minab, Iran that killed 175 people. Yet the Pentagon has scaled back the independent testing and evaluation programs that exist to catch these failures before they cost lives.

Meanwhile, companies like Palantir and Anduril — whose defense revenue has grown exponentially since 2020 — are now shaping acquisition policy for the very technologies they sell.

The public deserves to know what AI systems the military is using, how they are being tested, and whether they comply with the laws of war. Congress has both the authority and the obligation to find out. Tell Congress: restore independent oversight of military AI, require transparency in Pentagon contracting, and protect civilians and civil liberties before these systems go further.