Compelling tools and platforms may promise fair and balanced AI, but tools and platforms alone won’t deliver ethical AI solutions, says Reid Blackman, who charts ways through thorny AI ethics issues in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). Blackman advises developers working with AI because, in his own words, “tools are efficiently and effectively wielded when their users are equipped with the requisite knowledge, concepts, and training.” To that end, he shares some of the insights development and IT teams need in order to deliver ethical AI.
Don’t worry about dredging up your Philosophy 101 class notes
Pulling out prevailing ethical and moral theories and applying them to AI work “is a terrible way to build ethically sound AI,” Blackman says. Instead, he recommends working with teams on practical approaches. “What matters for the case at hand is what [your team members] think is an ethical risk that needs to be mitigated, and then you can get to work collaboratively identifying and executing on risk-mitigation strategies.”