The primary issue with bias amplification in AI systems is that models trained on biased data do not merely replicate those biases; they intensify them. Because a model tends to over-predict the dominant pattern it sees for each group, and because its outputs often feed back into future training data or decisions, a modest skew in the original data can compound into markedly more discriminatory outcomes. This amplification can entrench societal inequalities and unfair treatment, making AI-driven decisions less fair and ethical. Addressing it requires developing AI systems on diverse, representative data and building safeguards that detect and interrupt these feedback loops of bias amplification.
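To make the feedback-loop mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not a model of any real system: the group names, the initial 60%/40% skew, the equal true rates, and the round sizes are all illustrative assumptions. A trivial per-group "model" is fit to slightly skewed data, its hard predictions are folded back into the training pool each round, and the estimated positive rates for the two groups drift further apart even though their true rates are identical.

```python
import random

def train(data):
    # "Train" a trivial per-group model: estimate P(positive | group)
    # from the (group, label) pairs seen so far.
    rates = {}
    for group in ("A", "B"):
        labels = [y for g, y in data if g == group]
        rates[group] = sum(labels) / len(labels)
    return rates

def predict(rates, group):
    # Hard prediction: output the majority label for the group.
    return 1 if rates[group] >= 0.5 else 0

# Hypothetical setup: both groups have the same true positive rate,
# but the initial sample is mildly skewed (60% positives observed
# for group A, 40% for group B).
random.seed(0)
data = [("A", 1 if random.random() < 0.6 else 0) for _ in range(500)] + \
       [("B", 1 if random.random() < 0.4 else 0) for _ in range(500)]

for step in range(5):
    rates = train(data)
    print(f"round {step}: P(pos|A)={rates['A']:.2f}  P(pos|B)={rates['B']:.2f}")
    # Feedback loop: the model's own hard predictions are logged and
    # folded back into the next round's training data.
    data += [(g, predict(rates, g)) for g in ("A", "B") for _ in range(500)]
```

Running this, the printed rates for the two groups move from roughly 0.60 vs 0.40 toward the extremes over successive rounds: the model's confident majority-label predictions dominate the growing training pool, so the original imbalance is not just preserved but exaggerated. Real systems amplify bias through subtler channels, but the structure of the loop (biased data, confident predictions, predictions re-entering the data) is the same.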