Making AI work for everyone
The first principle suggests that, to improve the safety and effectiveness of AI, systems should be developed not only by experts but also with direct input from the people and communities who will use them and be affected by them. Exploited and marginalized communities are often left to deal with the consequences of AI systems without having had much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.
This is a role that civil society can take on: ensuring that the communities it serves are reflected in the data used to train AI systems and in how that training plays out. The difficulty, of course, is how to facilitate that engagement.