A few questions to consider:
A study recently published in the journal Science shows that artificial intelligence (AI) technologies display racial and gender bias. By studying the machines’ word associations, researchers discovered that the machines acquire these biases from texts created by human beings, that is, from our cultural artifacts. How should AI development proceed given such findings and in light of the expectation that AI will come to play a significant role in social life? How can we ensure that the machines we build, some of which may then go on to build their own machines, do not duplicate the moral failings of their creators?
Self-driving cars may be commonplace within the decade. Last year, Mercedes-Benz announced that if forced to choose between saving any number of pedestrians and saving the car’s passengers, its cars would always prioritize the lives of the passengers. From a purely business-oriented perspective, Mercedes-Benz has probably made the correct choice, for as consumers we want to feel safe in our cars. But is it the morally right choice? Should industry executives and lawyers decide such matters on their own, basing their decisions solely or largely on their fiduciary duties to shareholders? If not, who should decide such things, and on what basis?
Climate change is already disrupting the lives of millions of people around the world and the disruptions—severe weather, rising sea levels, conflicts over resources, mass migration, etc.—are likely to accelerate and to grow increasingly pronounced. Decisions we make today will shape the world into which future generations are born. What obligations do we have to these future people, and how can we meet these obligations given the scientific and political complexities that attend climate change? What about the non-human animals impacted by climate change? What are our obligations to them? More generally, what makes the environment valuable, such that we ought to protect and preserve it in the first place?
These questions are only a very small sample of the crucially important questions we ask in the subfield of philosophy known as applied ethics, an area of philosophical research in which CSU’s Department of Philosophy is especially well represented. We teach our students how to think creatively and rigorously about our world—how it is, how it might be, and how it ought to be—both because doing so is intrinsically valuable and fun and because the absence of clear, careful thinking so often leads to confusion, error, and calamity. Our students learn how to think fundamentally about things like scientific innovation, the nature of knowledge, social change, and morality. They learn how to step back from the flux of the 24-hour news cycle and the demands of the latest app to unearth and examine the presuppositions of what they see and hear. In the classroom, professors of philosophy work to provide our students with the tools they need to engage meaningfully with the world around them, to help them lead their own lives rather than be led from the outside.
Learn more about the Department of Philosophy.