Welcome to AI-CyberPriv
Artificial Intelligence (AI) and Machine Learning (ML)
Legal, Risk and Compliance Support
Are the hassles of managing your business’s day-to-day AI and ML legal, risk, and compliance activities getting you down?
Today, more and more companies are introducing complex algorithmic and machine learning-based systems into their business processes to improve efficiency, accelerate performance, and differentiate themselves from their competitors. Consumers are also becoming more informed about the dangers associated with automated decision-making systems. That’s why there are increasing calls from consumers, governments, users, and regulators for AI vendors to explain how their “black box” algorithmic decision-making systems operate, and to be held accountable if anything goes wrong as a result of using their technologies.
Legal and regulatory changes are already in motion (particularly in the US, UK, and EU) to ensure that ALL vendors of automated decision-making systems conduct algorithmic audits (AI Audits). If you are one of these vendors, you need to consider conducting an AI Audit. Do your systems rely on data analytics and cognitive technology-based software algorithms? Do you use these systems to make decisions that could affect human beings, such as algorithmic systems used in recruitment or to decide creditworthiness, whether built on linear regression, neural networks, decision trees, or other learning algorithms? Here’s why you need an AI Audit:
- To evaluate the impact of automated decision-making systems by helping you to identify and mitigate any legal and ethical issues. These could include failure to monitor for unintended outcomes, potential bias, or procedural fairness violations (a simple bias check is sketched after this list).
- To facilitate compliance with legal and regulatory requirements (e.g., local employment laws, the GDPR, and other relevant legal regulations/standards).
- To help identify and recommend the appropriate governance, oversight, and/or design measures for your “black box” automated decision-making systems.
- To identify and provide a mechanism for greater openness and transparency for public consultation, along with an external review of the design and deployment of automated decision systems in both the public and private sectors.
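To make the bias point above more concrete, here is a minimal, hypothetical Python sketch of one check a bias review might run: comparing selection rates across a protected group and applying the common “four-fifths” disparate impact rule of thumb. The data, column names, and the 0.8 threshold are illustrative assumptions only, not AI-CyberPriv’s audit methodology or a legal test.

```python
# Minimal illustrative sketch: compare selection rates across a protected
# attribute to flag potential disparate impact (hypothetical data and names).
import pandas as pd

# Hypothetical decision log: one row per applicant, with the automated
# system's outcome (1 = selected) and a protected attribute.
decisions = pd.DataFrame({
    "protected_group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected":        [1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group.
rates = decisions.groupby("protected_group")["selected"].mean()

# Disparate impact ratio: lowest selection rate divided by the highest.
# A ratio below 0.8 (the "four-fifths" rule of thumb) is a common flag
# for further legal and statistical review -- not a definitive finding.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact -- escalate for human review.")
```

A real audit would go well beyond a single ratio, considering statistical significance, proxies for protected attributes, and the procedural context in which the automated decision is made.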
On 19 February 2020, the European Commission (EC) published a package of initiatives (the AI and Data Package) on Europe’s “digital future”, which includes an “AI White Paper” in which the EC suggests that a new EU regulatory framework is required which would:
- apply to products and services “relying on AI”; and
- need to be defined with sufficient flexibility to accommodate technical progress, while being precise enough to provide legal certainty.
In the ‘AI White Paper’, the EC envisages:
“a risk-based approach in which new mandatory obligations would apply to AI applications identified as “high risk,” while the current regulatory framework (and potentially a voluntary certification approach) would apply to non-high-risk applications.
AI applications would normally be considered “high risk” only when they are employed: (i) in a sector where significant risks can be expected to occur (e.g., healthcare, transport, energy and parts of the public sector); and (ii) in such a manner that significant risks are likely to arise (e.g., those that produce legal or other significant effects on individuals, pose a risk of injury, death or significant damage or produce effects that cannot reasonably be avoided). However, the EC suggests that certain applications may be defined as high risk per se, mentioning as examples recruitment, workers’ rights and remote biometric identification (e.g., facial recognition).
The AI White Paper lays out a range of features that could be included in future mandatory requirements for high-risk applications. These include training data; data and record-keeping; information to be provided; robustness and accuracy; human oversight; and specific requirements for particular applications, such as remote biometric identification.
How might the mandatory requirements work in practice?
Data used to train AI systems would be required to meet EU safety standards, not lead to prohibited discrimination and protect privacy and personal data.
Companies could also be required to keep records regarding the data used to train and test AI systems and in some cases the data sets themselves.
Companies could be required to provide information on AI systems’ capabilities and limitations; to inform citizens when they are interacting with AI systems; and to ensure that AI systems are robust and accurate, that outcomes are reproducible and that AI systems can deal with errors and inconsistencies.”
AI-CyberPriv can help you to get ahead of the curve by conducting an algorithmic audit (AI Audit) and/or algorithmic impact assessment (AIA) of your automated decision-making systems (AI/ML systems).
Our Algorithmic Bias Auditors will review your new and existing apps and systems, logging and tracking each significant algorithm, its objectives, its inputs and outputs, related human value judgments, and its consequences.
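Purely as an illustration of the kind of record such a review produces, the sketch below shows one hypothetical inventory entry as a Python data class; the field names mirror the items listed above (objectives, inputs and outputs, human value judgments, consequences) but are assumptions for illustration, not a prescribed audit schema.

```python
# Illustrative sketch of a single algorithm-inventory record that an audit
# might log and track (field names are hypothetical, not a fixed schema).
from dataclasses import dataclass, field

@dataclass
class AlgorithmRecord:
    name: str                      # e.g. "CV screening ranker"
    objective: str                 # business purpose of the algorithm
    inputs: list[str]              # data fields the system consumes
    outputs: list[str]             # decisions or scores it produces
    human_value_judgments: list[str] = field(default_factory=list)
    potential_consequences: list[str] = field(default_factory=list)

record = AlgorithmRecord(
    name="CV screening ranker",
    objective="Shortlist applicants for interview",
    inputs=["CV text", "years of experience", "education"],
    outputs=["shortlist score"],
    human_value_judgments=["'relevant experience' weighting set by HR"],
    potential_consequences=["qualified applicants screened out"],
)
print(record)
```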
AI-CyberPriv can also help you to develop a framework to improve your understanding of, and mitigate any risks associated with, your AI/ML systems. Our experienced team of experts can also assist by assessing, identifying, and providing you with the resources and tools you need to design, implement, and meet the appropriate governance, oversight, reporting, and audit requirements.
Our AI Audits and AIAs can be designed to be integrated or run alongside your existing annual IS/GDPR audits, systems, and processes.
That’s where AI-CyberPriv comes in
AI-CyberPriv is a consultancy firm dedicated to providing virtual support in the United Kingdom and European Union.
We offer an array of AI legal, risk, and compliance support services and advice.