Ethical AI frameworks advocate for transparency, auditability, and accountability but do not define the technical steps needed to achieve them. Here, we compare eight well-known ethical AI guidelines (EU, OECD/G20, UNESCO, IEEE, Singapore, Montréal, Toronto, and Beijing) with the FAIR data principles, the FAIR principles for computational workflows, and the FAIR for Research Software (FAIR4RS) principles. We conducted a simple qualitative review with an explicit five-level rubric (from Strong to Weak) and coded each principle and sub-principle independently.
The analysis shows good conceptual alignment between the ethical AI and FAIR goals, especially around documentation, lawful access, and provenance. However, most of the texts do not mention concrete controls such as Globally Unique, Persistent, and Resolvable Identifiers (GUPRIs); machine-readable metadata (e.g., JSON-LD with community schemas); open protocols with authentication and authorisation; qualified links and structured vocabularies or ontologies; clear licences; and workflow and model execution provenance. Alignment is highest for data, lower for software, and lowest for workflows.
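To illustrate the kind of machine-readable metadata these controls imply, the following is a minimal sketch of a JSON-LD record for a model artefact using schema.org terms; the DOI, repository URL, and names are hypothetical placeholders, not identifiers from the study.

```python
import json

# Hypothetical JSON-LD metadata record for an AI model artefact,
# using the schema.org vocabulary. All identifiers are illustrative.
record = {
    "@context": "https://schema.org/",
    "@type": "SoftwareSourceCode",
    # A GUPRI for the artefact itself (placeholder DOI):
    "@id": "https://doi.org/10.1234/example-model",
    "name": "example-classifier",
    # A clear, resolvable licence statement (SPDX-listed licence):
    "license": "https://spdx.org/licenses/Apache-2.0.html",
    "codeRepository": "https://example.org/example-classifier",
    # A qualified link to the training data (placeholder DOI):
    "isBasedOn": "https://doi.org/10.1234/training-dataset",
}

# Serialise so the record can be harvested by metadata crawlers.
print(json.dumps(record, indent=2))
```

Because the record is plain JSON with a declared `@context`, both humans and machines can resolve what each field means, which is the property the guideline texts leave unspecified.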
To help practitioners, e.g., researchers, model developers, and data stewards, we map specific FAIR solutions to the ethical goals they support. For example, GUPRIs support traceability, licensing enables lawful reuse, and provenance records promote reproducibility. We also note related community work on FAIR4ML for model artefacts. Our recommendation is to keep the high-level ethical aims but enrich them with FAIR-based technical solutions so that claims can be traced, analysed, checked, and trusted in real-world use.
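The mapping can be pictured as a simple lookup from FAIR mechanism to the ethical goals it supports; the entries below are only the illustrative examples named in the text, not the full mapping from the study.

```python
# Illustrative subset of the FAIR-mechanism -> ethical-goal mapping
# sketched in the text (the complete mapping is given in the paper).
fair_to_ethics = {
    "GUPRIs": ["traceability"],
    "licensing": ["lawful reuse"],
    "provenance records": ["reproducibility"],
}

def goals_supported(mechanism: str) -> list[str]:
    """Return the ethical goals a given FAIR mechanism supports."""
    return fair_to_ethics.get(mechanism, [])

print(goals_supported("GUPRIs"))  # -> ['traceability']
```

A lookup like this makes the link auditable in both directions: given a mechanism one can list the goals it serves, and given a goal one can enumerate the mechanisms that evidence it.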
