Lawmakers on the legislature's General Law Committee want the state to assess the growing role of artificial intelligence (AI) in state agency decision-making. AI is a broad term for powerful mathematical tools that can analyze large amounts of data much faster than humans can. An AI system created the fake photo of Pope Francis as a fashion influencer, and another is being sued for defamation. Like any tool, AI is neither good nor bad in itself; what matters is how it is used.
Done poorly, AI can deny people appropriate access to healthcare. Done well, it has the potential to improve care by eliminating individual biases, reducing disparities, and promoting fairness. Legislators are empowered to monitor the use of AI in state services.
Despite AI's bad reputation and scary name, healthcare has adopted it throughout the system to improve care and save money. Last year the FDA approved 91 new AI-enabled medical devices, up from five in 2015. An AI system shows promise in detecting breast cancers that doctors miss. I recently found hearing aids that are trained on millions of real-world sounds; they learn from experience to help my brain better understand what I'm hearing.
It has been estimated that AI could save up to $360 billion per year in healthcare costs by making care safer, improving quality, and reducing the administrative burden on burned-out clinicians. We could put those savings toward making care more affordable and accessible.

Despite this potential, AI systems have also raised serious concerns about bias and discrimination. In 2019, researchers found that an AI system widely used by hospitals to determine patient needs was inappropriately denying care to Black patients. It was not designed to discriminate, but it was built on data reflecting the long-standing underservice of Black patients, so it understated their needs. That system and others like it were used to assess the care needs of 150 to 200 million patients. As soon as the bias was found, it was removed and the system was corrected. While this case has a happy ending, it highlights the need to regularly monitor, test, and correct biased AI systems. It is also important to note that, even without AI, care varies greatly by race because of provider and institutional biases and stereotyping. Done well, AI can significantly reduce those biases.
AI also has a history of inappropriately limiting care for people with disabilities. Here in Connecticut, a new AI system improperly denied a Medicaid patient with a serious medical condition the home health care he was receiving. The patient experiences unexpected drop seizures that cause him to fall to the ground without warning, so there are things he cannot safely do alone, like cooking on the stove. Unfortunately, the AI system the state bought didn't recognize that, even though he can sometimes accomplish these tasks alone, doing so could quickly become very dangerous. Sheldon Toubman of Disability Rights Connecticut helped the patient appeal, and the state's decision was overturned. According to Mr. Toubman, the state has come to realize that AI can help make care decisions less subjective, but it should not make the final decision.
Concerned about civil rights, the Attorney General of the District of Columbia has called for banning the use of AI altogether. But a ban would forfeit AI's potential to improve safety and equity while freeing up resources to expand access to care. Connecticut's Advisory Committee to the U.S. Commission on Civil Rights has offered some smart recommendations for keeping AI safe.
Data protections should be tight, and data should never be sold. AI systems must be tested before implementation, audited regularly, and fixed when problems are found. AI should be a tool to help the state and others assess needs, but humans should make the final decisions. This bill should pass.