A lawsuit alleges that ChatGPT encouraged a suicidal teenager, drafting their suicide note and discouraging them from seeking help. Meanwhile, 44 state attorneys general warned AI companies: “If you knowingly harm kids, you will answer for it.” This is not hyperbole; it is a dire failure of ethics and safety. Instead of treating AI dangers as “teething problems,” the media must shine a light on these harms and demand accountability. Corporations that create tools enabling self-harm must be held responsible. Add your name to demand responsible reporting on AI risks, especially to protect our children.