Colorado is poised to become one of the first states to regulate how insurers can use big data and AI-powered predictive models to determine risk for underwriting. The Department of Insurance recently proposed new rules that would require insurance companies to establish strict governance principles for how they deploy algorithms and to submit to significant oversight and reporting requirements.
The draft rules are enabled by Senate Bill (SB) 21-169, which protects Colorado consumers from insurance practices that result in unfair discrimination on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. SB 21-169 holds insurers accountable for testing their big data systems (including external consumer data and information sources, algorithms, and predictive models) to ensure they are not unfairly discriminating against consumers on the basis of a protected class.
The draft rules regulate the use of predictive models based on nontraditional factors that do not have a direct relationship to mortality, morbidity, or longevity risk for insurance underwriting, including credit scores, social media habits, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, and occupation. Insurers that use this sort of nontraditional information, or algorithms based on it, will need to implement an extensive governance and risk management framework and submit documentation to the Colorado Division of Insurance. New York City recently postponed enforcement of its AI bias law amid criticism of vagueness and impracticability, as we recently reported. In contrast, Colorado's draft insurance rule is among the most detailed AI bias regulations to come out yet. AI regulation is a rapidly growing landscape, and these draft rules may be a sign of what's to come.