LLMs are machine learning models trained on data, so they are subject to the same kinds of bias as any other model. At the same time, companies are literally building nuclear reactors to power the data centers full of GPUs needed for LLM training and inference. Many people tell us that AI will change our lives for the better, but we should ask: whose lives, and how exactly will they be improved? How are these models biased, and what is being done to address that bias?