As AI systems become more ubiquitous in our lives, it is crucial that the personal and sensitive data collected to train algorithms isn’t used in the wrong context, and that organizations understand the risks associated with using the technology so they can take the necessary steps to protect consumers’ privacy. While current headlines focus primarily on the risks linked to generative AI, it is essential to remember that the socio-economic benefits of data-enabled technologies such as AI are manifold, providing organizations with valuable insights to improve products, services, and research and to address socio-economic challenges. It is, therefore, essential that data-related regulations do not impede the opportunities AI can deliver, but instead encourage the deployment of AI technologies in ways that are ethical, protect privacy, eliminate bias, and promote fairness and equity in line with core U.S. values.
This session will focus on the power of personal data in delivering data-enabled intelligent technologies for the benefit of society as a whole, and on the challenges involved, by examining the complex interplay between data, AI, privacy, and the protection of civil rights against potentially harmful automated decision-making. It will explore the potential of PETs to drive positive change while protecting privacy during the collection, processing, analysis, and sharing of data, and ultimately to build trust with the public and empower them to participate in a data economy that benefits society equitably. Speakers will discuss the latest initiatives and regulatory work undertaken in these areas, such as the AI Bill of Rights released by the White House in October 2022; NIST’s Artificial Intelligence Risk Management Framework; the OSTP’s National Strategy to Advance Privacy-Preserving Data Sharing and Analytics; NTIA’s recent AI Accountability Request for Comment; and the FTC’s guidance on algorithmic bias. They will ask how technical, legal, and policy experts can collaborate to shape a U.S. strategy and policies on data and AI that balance the tension between protecting individuals against harmful effects and promoting positive innovation, addressing questions such as:
- As AI systems rely on the availability and accessibility of large amounts of data – sometimes personal and highly sensitive data – how can it be ensured that enough data, representative of all groups in society, is available to accurately train AI systems while simultaneously preserving data privacy and avoiding biased, discriminatory, or unfair outcomes?
- What role can PETs play, and to what extent can they help uphold commitments to equity, transparency, and accountability? What needs to be done to accelerate their uptake? What could a future data ecosystem that effectively incorporates PETs look like?
- How can principles such as fairness, transparency, accountability, equity, and explainability (all included in the White House AI Bill of Rights) truly be operationalized for commercial and government use?
- Does the lack of a comprehensive federal privacy law undermine the possible advancement of responsible artificial intelligence-based technologies? To what extent do the discussions around AI regulation differentiate non-personal data from personal data and non-sensitive data from sensitive data?
- What can be done to build confidence and empower citizens to access, manage, and share their data with private and public organizations to bring unprecedented benefits to broader society?