Bias and Discrimination in Decentralized AI Decision Systems
DOI: https://doi.org/10.63345/

Keywords: Decentralized AI, Fairness, Discrimination, Federated Learning, DAO Governance, Blockchain, Non-IID Data, Equalized Odds, Quadratic Voting, Bias Mitigation

Abstract
Decentralized AI decision systems—spanning federated learning, peer-to-peer optimization, DAO-governed models, and blockchain-orchestrated inference—promise resilience, privacy, and transparency by distributing data, compute, and control. Yet decentralization does not automatically guarantee fairness. This paper examines how bias and discrimination emerge and persist when decision-making is pushed to the edge or collectively governed, and how those dynamics differ from centralized pipelines. We synthesize the literature on algorithmic bias, fairness metrics, federated learning with non-IID data, and token-based governance to articulate a socio-technical framework for risk. We then propose and test a mixed-methods methodology combining (i) a conceptual risk model mapping bias vectors (data, model, governance, incentive, and identity layers), (ii) a simulation on heterogeneous subpopulations under three deployment regimes—centralized baseline, decentralized stake-weighted governance, and decentralized with fairness and governance mitigations, and (iii) a statistical analysis using standard equality-of-opportunity and calibration measures. In a synthetic evaluation configured to stress real-world non-IID skew and wealth concentration in governance, a naïve decentralized regime amplifies parity gaps (e.g., equalized-odds TPR gaps increase by ~3–5 percentage points versus centralized), primarily due to (a) minority underrepresentation in local silos, (b) emergent power-law concentration in token voting, and (c) protocol-level incentive misalignment favoring short-term accuracy. A mitigated design—differentially private group-reweighted training, group distributionally robust optimization (Group DRO), model-card/datasheet governance requirements, and quadratic voting with identity checks—reduces disparities to below centralized baselines on core metrics while preserving decentralization benefits. 
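The equalized-odds TPR gap reported above can be illustrated with a minimal sketch. This is not the paper's evaluation code; the function names and toy data are hypothetical, and the gap is simply the spread in true-positive rates across groups.

```python
# Hypothetical sketch: per-group true-positive rates and the equalized-odds
# TPR gap (max minus min across groups). Toy data, not the paper's dataset.
def tpr(y_true, y_pred):
    # True-positive rate: fraction of actual positives predicted positive.
    preds_on_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(preds_on_positives) / len(preds_on_positives) if preds_on_positives else 0.0

def tpr_gap(y_true, y_pred, group):
    rates = []
    for g in set(group):
        yt = [t for t, gg in zip(y_true, group) if gg == g]
        yp = [p for p, gg in zip(y_pred, group) if gg == g]
        rates.append(tpr(yt, yp))
    return max(rates) - min(rates)

# Group "a" TPR = 1.0, group "b" TPR = 0.5, so the gap is 0.5.
y_true = [1, 1, 1, 1]
y_pred = [1, 1, 1, 0]
group  = ["a", "a", "b", "b"]
print(tpr_gap(y_true, y_pred, group))  # 0.5
```

Tracking this quantity per round is one way a federation could detect the 3–5 percentage-point amplification the simulation reports.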
We conclude with implementation guidance: align incentives with fairness constraints, measure and publish group-level performance continuously, and embed governance primitives (e.g., quadratic funding/voting, appeals, rotating audits) capable of resisting both model and governance capture.
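The quadratic-voting primitive recommended above can be sketched in a few lines. This is an illustrative model under assumed names, not the paper's protocol: voting power scales with the square root of tokens committed, which dampens the power-law concentration that plain stake weighting produces (identity checks, which prevent splitting one balance across sybil accounts, are assumed and not shown).

```python
import math

# Hypothetical sketch: quadratic voting gives each voter sqrt(tokens) votes,
# so a whale with 100x the tokens of a small holder gets only 10x the votes.
def vote_share(token_balances):
    votes = {voter: math.sqrt(bal) for voter, bal in token_balances.items()}
    total = sum(votes.values())
    return {voter: v / total for voter, v in votes.items()}

balances = {"whale": 10_000, "small1": 100, "small2": 100}
# Quadratic: whale holds 100 of 120 votes (~83%); under plain stake
# weighting the same whale would hold 10_000 of 10_200 (~98%).
print(vote_share(balances))
```

The same square-root rule underlies quadratic funding, which the guidance lists alongside appeals and rotating audits as capture-resistant governance primitives.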
License
Copyright (c) 2026 Scientific Journal of Artificial Intelligence and Blockchain Technologies

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The license allows re-users to share and adapt the work, as long as credit is given to the author and the work is not used for commercial purposes.