Artificial intelligence can help government agencies deliver better results, but its implementation raises underlying risks and ethical issues that must be resolved before AI becomes part of the fabric of government.

Based on insights from an expert roundtable led by the IBM Center for The Business of Government and the Partnership for Public Service, agencies will need to address multiple risks and ethical imperatives to realize the opportunity that AI technology brings. These include:

Creating Explainable Algorithms. Machine learning algorithms are only as good as the data provided for training. Users of these systems can take data quality for granted and come to over-trust the algorithm’s predictions. Additionally, some ML models, such as deep neural networks, are difficult to interpret, making it hard to understand how a decision was made (often referred to as a “black box” decision). The problem is compounded when low-quality data (i.e., data that embeds bias or stereotypes, or simply does not represent the population) is used in uninterpretable models, making bias harder to detect. On the other hand, well-designed, explainable models can increase accuracy in government service delivery, such as a neural network that could correct an initial decision to deny someone benefits to which they are entitled.
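
To illustrate why interpretability makes bias easier to spot, the sketch below trains a transparent model on synthetic benefits-eligibility data and reads its coefficients directly. The data, feature names, and eligibility logic are invented for illustration only and are not drawn from the roundtable or any agency system.

```python
# Illustrative sketch only: synthetic data, hypothetical feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical eligibility features; "zip_code_group" stands in for a proxy
# variable that may encode demographic bias in historical decisions.
income_thousands = rng.normal(40, 12, n)
household_size = rng.integers(1, 7, n)
zip_code_group = rng.integers(0, 2, n)

# Simulated past decisions that lean on the proxy variable,
# mimicking biased training data.
score = (0.03 * (income_thousands - 40)
         + 0.2 * (household_size - 3)
         - 1.0 * zip_code_group)
approved = (score + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([income_thousands, household_size, zip_code_group])
feature_names = ["income_thousands", "household_size", "zip_code_group"]
X_train, X_test, y_train, y_test = train_test_split(X, approved, random_state=0)

# A linear model whose coefficients can be read directly, unlike the
# internal weights of a deep neural network.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>16}: {coef:+.3f}")
# A large negative weight on zip_code_group would flag the proxy's influence
# for human review before any such model is used in service delivery.
```

A black-box model trained on the same records could reproduce the same biased pattern while offering no comparably direct way to see it, which is the gap that explainability research and tooling aim to close.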

Research into the interpretability of neural networks and other kinds of models will help build trust in AI. More broadly, educating stakeholders about AI, including policymakers, educators, and even the general public, would increase digital literacy and provide significant benefits. While universities are moving forward with AI education, government needs a greater understanding of how data can impact AI performance. Government, industry, and academia can work together to explain how sound data and well-designed models inform the ethical use of AI.
