President Biden is taking new actions to ensure that the rapid advancement of artificial intelligence technology is well-managed. The Biden Administration recently released a blueprint for an “AI Bill of Rights,” a set of five recommendations to ensure that artificial intelligence systems are safe, equitable, optional, and most of all, ethical.
Unlike the actual Bill of Rights, this document isn’t legally binding. Rather, the blueprint exists to formalize best practices from major players in the A.I. and machine learning space. Those practices include ensuring that A.I. is not biased by bad data, disclosing when automation is being used, and providing human-based alternatives to automated services, according to Venkat Rangapuram, CEO of data solutions provider Pactera Edge.
Here are the five “rights” outlined by the White House’s blueprint, and how businesses should apply them when developing and using automated systems.
1. Ensure automated systems are safe and effective.
The safety and security of users should always be of top importance in the development of A.I. systems, according to the blueprint. The administration argues that automated systems should be developed with public input, allowing consultation from diverse sets of people able to identify potential risks and concerns, and systems should undergo rigorous pre-deployment testing and monitoring to demonstrate their safety.
As an example of harmful A.I., the document cites Amazon, which installed A.I.-powered cameras in its delivery vans to evaluate the safety habits of its drivers. The system incorrectly penalized drivers when other cars cut them off, or when other events beyond their control took place on the road. As a result, some drivers became ineligible for bonuses.
2. Protect users from algorithmic discrimination.
The second right deals with the tendency of automated systems to “produce inequitable outcomes” by using data that fails to account for existing systemic biases in American society, such as facial recognition software that misidentifies people of color more often than white people, or an automated hiring tool that rejects applications from women.
To combat this, the blueprint suggests utilizing the Algorithmic Bias Safeguards for Workforce, a document containing best practices developed by a consortium of industry leaders including IBM, Meta, and Deloitte. The document illustrates steps for educating employees about algorithmic bias, as well as instructions for implementing safeguards into the workplace.
3. Protect users from abusive data policies.
According to the third right, everyone should have agency over how their data is used. The proposal suggests that designers and developers of automated systems should seek user permission and respect user decisions regarding the collection, use, access, transfer, and deletion of personal data. The blueprint adds that any consent requests should be brief and understandable, written in plain language.
Rangapuram says that designing automated systems that learn continually without coming across as overbearing is a “tough balance” to strike, but adds that letting individual users set their own level of comfort and privacy is a good first step.
4. Provide users with notices and explanations.
Consumers should always know when an automated system is being used, and be given enough information to understand how and why it contributes to outcomes that impact them, according to the fourth right.
Rangapuram says that negative public sentiment toward corporations collecting data could slow the progression of new technology, so making clear how and why data is being used has never been more vital. By educating people about their data, businesses can build trust among their users, which could make those users more willing to share their information.
5. Offer human-based alternatives and fallback options.
According to the blueprint, users should be able to opt out of automated systems in favor of a human alternative. At the same time, automated systems should have human-based backup plans in case of technology failures. As an example, the blueprint highlights customer service systems that use chatbots to answer common customer complaints but redirect users to human agents for more complex problems.
Consider test-piloting a self-driving car; while the system may work perfectly, “you’re still going to want a steering wheel in case something happens,” says Rangapuram.
Ben Sherry
