AI Can Help Address Inequity — If Companies Earn Users' Trust - Harvard Business Review

Bullish predictions suggest that artificial intelligence (AI) could contribute up to $15.7 trillion to the global economy by 2030. From autonomous cars to faster mortgage approvals and automated advertising decisions, AI algorithms promise numerous benefits for businesses and their customers.

Unfortunately, these benefits may not be enjoyed equally. Algorithmic bias — when algorithms produce discriminatory outcomes against certain categories of individuals, typically minorities and women — may also worsen existing social inequalities, particularly when it comes to race and gender. From the recidivism prediction algorithm used in courts to the medical care prediction algorithm used by hospitals, studies have found evidence of algorithmic biases that make racial disparities worse for those impacted, not better.

Many firms have put considerable effort into combating algorithmic bias in their operations and services. They often use data-science-driven approaches to investigate what an algorithm's predictions will look like before launching it into the world. This can include examining different AI model specifications, specifying the objective function the model should minimize, selecting the input data fed into the model, pre-processing that data, and post-processing the model's predictions.
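
To make this concrete, here is a minimal sketch of what a pre-launch audit of predictions across groups could look like. The data layout, column names, and function are illustrative assumptions on our part, not Airbnb's or any specific firm's actual pipeline.

```python
import pandas as pd

def audit_predictions_by_group(df, pred_col="predicted_price", group_col="host_group"):
    """Summarize a model's predictions by group before launch (illustrative only)."""
    summary = df.groupby(group_col)[pred_col].agg(["mean", "median", "count"])
    # A large gap between a group's mean prediction and the overall mean can flag
    # a specification or data issue worth investigating before deployment.
    summary["gap_vs_overall_mean"] = summary["mean"] - df[pred_col].mean()
    return summary

# Toy example:
listings = pd.DataFrame({
    "host_group": ["A", "A", "B", "B"],
    "predicted_price": [120.0, 110.0, 95.0, 100.0],
})
print(audit_predictions_by_group(listings))
```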

However, the final outcome of deploying an algorithm depends not only on its predictions but also on how businesses and customers ultimately use it, and this critical context of receptivity and adoption is often overlooked. We argue that algorithm deployment must account for the market conditions under which the algorithm is used. Those conditions shape whom the algorithm's decisions affect and to what extent, and hence the benefits users actually realize from adopting it.

For example, to help its hosts maximize their income (i.e., property revenue), Airbnb launched an AI-based smart-pricing tool that automatically adjusts a listing's daily price. Airbnb hosts have very limited information on competing Airbnb properties, hotel rates, seasonality, and other demand shocks that they could use to price their properties correctly. The smart-pricing algorithm was meant to help with this, drawing on the company's enormous information sources about host, property, and neighborhood characteristics to determine the best price for a property. In our recently published study, we found that hosts who adopted smart pricing increased their average daily revenue by 8.6%. Nevertheless, after the launch of the algorithm, the racial revenue gap widened (i.e., white hosts earned more) at the population level, which includes both adopters and non-adopters, because Black hosts were significantly less likely than white hosts to adopt the algorithm.

In tests, the tool did exactly what it was supposed to. We found that it was perfectly race blind: the prices of similar listings were reduced by the same amount regardless of the race of the host. The algorithm also improved revenue for Black hosts more than it did for white hosts, because the demand curve for Black hosts' properties was more elastic (i.e., more responsive to price changes) than the demand curve for equivalent properties owned by white hosts. Since the price reduction was the same, the number of bookings increased more for Black hosts than for white hosts, producing a larger revenue gain for Black hosts. From a data-science perspective, the deployment looked perfect: a race-blind, well-meaning algorithm that would provide financial benefits by improving the revenue of all adopters and deliver social benefits by reducing the racial revenue gap among them.
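
A stylized numerical example (with invented numbers, not the study's data) shows why an identical price cut raises revenue more when demand is more elastic:

```python
def revenue_change(base_price, base_bookings, price_cut, elasticity):
    """Approximate revenue change from a price cut under a given own-price
    elasticity of demand; a back-of-the-envelope illustration only."""
    new_price = base_price - price_cut
    pct_price_change = (new_price - base_price) / base_price
    # Elasticity is negative, so a price drop raises the quantity demanded.
    new_bookings = base_bookings * (1 + elasticity * pct_price_change)
    return new_price * new_bookings - base_price * base_bookings

# The same $10 cut on a $100-per-night listing with 10 bookings per month:
print(revenue_change(100, 10, 10, elasticity=-1.2))  # less elastic demand: about +$8
print(revenue_change(100, 10, 10, elasticity=-2.0))  # more elastic demand: about +$80
```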

In the real world, however, it was a different story. The algorithm launch ended up widening rather than narrowing the racial disparity on Airbnb. This unintended consequence could have been avoided by internalizing market conditions during algorithm deployment.

We determined that firms must consider the following market conditions during AI algorithm creation: 1) the targeted users' receptivity to an AI algorithm, 2) consumers' reactions to algorithm predictions, and 3) whether the algorithm should be regulated to address racial and economic inequalities, taking into account firms' strategic behavior in developing it. Airbnb, for example, should have asked: 1) How will Airbnb hosts react to, and in particular adopt, the algorithm? and 2) How can Black hosts be encouraged to adopt it? These market conditions determine the final market outcome (e.g., product price, property demand, benefits to users) of applying an AI algorithm, and thus should be analyzed and considered upfront.

How will an algorithm be perceived by the targeted users?

Airbnb’s smart-pricing algorithm increased daily revenue for everyone who used it. White hosts saw a bump of $5.20 per day, and Black hosts saw a $13.90 increase. The new pricing reduced economic disparity among adopters by 71.3%.
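
One way to read these numbers (our interpretation, not a calculation reported in the study) is as a back-of-the-envelope check on the size of the pre-adoption revenue gap:

```python
# If adoption raised white hosts' daily revenue by $5.20 and Black hosts' by $13.90,
# the daily revenue gap among adopters narrowed by $8.70. If that narrowing is what
# the reported 71.3% reduction in disparity refers to, the implied prior gap would be
# roughly $12 per day. These assumptions are ours, for illustration only.
gap_closed = 13.90 - 5.20                  # $8.70 per day
implied_prior_gap = gap_closed / 0.713     # about $12.20 per day
print(round(implied_prior_gap, 2))
```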

However, because Black hosts were 41% less likely than white hosts to adopt the algorithm, the outcome of its introduction was not satisfactory. For Black hosts who didn’t use the algorithm, the earnings gap actually increased. This leads to the following question: If you are the CEO of a company that wishes to root out racial inequity and are given an algorithm report of this kind, what do you hope to see from your data science and engineering teams?

To address Black hosts’ low receptivity to the new tool, Airbnb could encourage them to adopt the algorithm, for example by rewarding Black users who try it out or by sharing a detailed description of, and evidence for, the benefits of using it. We also found that the racial adoption gap was larger among hosts with low socioeconomic status (SES), so targeting Black hosts in the lower SES quartiles would be most efficient.

To do this, however, it’s essential to understand why people are hesitant in the first place. There are many reasons why people may not be receptive to handing over control to an algorithm. For example, education and income have been found to explain higher technology-adoption barriers for Black users, especially when using the technology is (financially) costly. Even when the technology is offered for free (as Airbnb’s smart-pricing algorithm is), trust also plays a significant role: A working paper coauthored by Shunyuan Zhang and Yang Yang indicated that raising awareness of racial bias makes disadvantaged groups less trusting and more hesitant to embrace algorithms in general, including race-blind ones that offer financial, health, or education benefits to users.

In conversations with an e-commerce company focused on used items, the authors of this study learned that only 20% of sellers used the free pricing tool offered by the company, making pricing inefficient and selling slow. A preliminary survey suggested that sellers may overestimate the value of their used items and may be unwilling to accept algorithm-predicted price suggestions; this is known as the endowment effect. For example, imagine a seller lists a second-hand dress they believe is worth $15, but the pricing algorithm, trained on an enormous dataset, suggests $10, and the seller reacts negatively. In response to reactions like this, the company could explain to the seller how the $10 suggestion was made and present similar items that were priced and sold at $10. Providing such explanations increases the transparency of business operations and enhances customer trust.
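
As a hypothetical sketch of what such an explanation could look like in practice (the data structures and function below are ours, not the company's actual tool), the seller could be shown comparable items that recently sold near the suggested price:

```python
def explain_price_suggestion(item, suggested_price, sold_items, k=3):
    """Return the k sold items closest in price to the suggestion, as supporting evidence.

    `sold_items` is assumed to be a list of dicts with 'title', 'category', and
    'sold_price' keys; similarity here is just same category and nearest price.
    """
    comparables = [s for s in sold_items if s["category"] == item["category"]]
    comparables.sort(key=lambda s: abs(s["sold_price"] - suggested_price))
    return {
        "suggested_price": suggested_price,
        "comparable_sales": comparables[:k],
        "message": f"Similar {item['category']} items recently sold for about ${suggested_price:.2f}.",
    }

dress = {"title": "Second-hand dress", "category": "dress"}
history = [
    {"title": "Floral dress", "category": "dress", "sold_price": 9.50},
    {"title": "Summer dress", "category": "dress", "sold_price": 10.00},
    {"title": "Evening dress", "category": "dress", "sold_price": 11.00},
    {"title": "Denim jacket", "category": "jacket", "sold_price": 25.00},
]
print(explain_price_suggestion(dress, 10.00, history))
```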

Simply put, when accounting for differences in the adoption of AI algorithms across racial groups, firms should customize their algorithm-promotion efforts and address the concerns of the users they most want to win over.

How will consumers react to the effects of an AI algorithm?

It is a mistake to see AI algorithms merely as models that output decisions and impact the people who receive those decisions. The impact goes both ways: how consumers (i.e., decision recipients) react to AI decisions will shape the effect of the algorithm on market outcomes.

Airbnb’s smart-pricing algorithm is a good example of this phenomenon. Assume that you are the CEO of Airbnb and are reporting on the algorithm developed by your company at a House Committee hearing on equitable AI. You might be happy that your algorithm, conditional on adoption, could combat racial inequity. However, you could do more to mitigate racial disparity. You should consider the following key market conditions: 1) Black and white hosts may face different demand curves, and 2) Black hosts are less represented in the data used to train the AI algorithm. Specifically, the demand curve for Black hosts’ properties was more elastic than that for similar properties owned by white hosts. Different demand curves might arise from social discrimination, which leads guests to be more price sensitive to Black-owned properties than to white-owned ones.

Because guests were more responsive to price reductions for Black-owned properties, incorporating this market condition when deploying an AI algorithm is critical. You could further reduce the revenue gap between Black and white hosts by directly using race, or indirectly including closely correlated characteristics, in the algorithm. Ignoring the inherent differences in market conditions may lead to price suggestions that are farther from the optimal prices for Black hosts than from the optimal prices for white hosts, because Black hosts represent only 9% of Airbnb properties whereas white hosts represent 80%, so the training data overwhelmingly reflect demand for white-owned properties.
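
A textbook pricing sketch (linear demand with invented parameters, not the algorithm Airbnb actually uses) illustrates why segment-level demand sensitivity matters for the suggested price:

```python
def optimal_price_linear_demand(intercept, slope):
    """Revenue-maximizing price for linear demand q = intercept - slope * p.

    Revenue = p * (intercept - slope * p) is maximized at p* = intercept / (2 * slope).
    The demand parameters would have to be estimated per segment; these are illustrative.
    """
    return intercept / (2 * slope)

# If guests are more price-sensitive to one segment's listings (a steeper slope),
# the revenue-maximizing price for that segment is lower:
print(optimal_price_linear_demand(intercept=200, slope=1.0))  # less sensitive: $100.00
print(optimal_price_linear_demand(intercept=200, slope=1.6))  # more sensitive: $62.50
```

An algorithm trained mostly on one segment's bookings will, in effect, estimate that segment's demand parameters and apply them everywhere, which is why underrepresentation in the training data pushes price suggestions farther from the optimum for the underrepresented group.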

What should firms do?

If you are on an AI equity task force at the corporate or government level, what should you do when considering how to deploy an algorithm meant to mitigate racial disparities? If you were to sketch the ecosystem of the focal algorithm, who would the creators, the targeted users, and the algorithm decision receivers be? How would they react to the algorithm, and how would their reactions impact the algorithm’s final outcome?

First, really consider how the algorithm will be perceived by the targeted users. This will shape how it performs in the real world. Ask whether users are aware (or can be made aware) of how the algorithm works. If they know that your company is deploying a new algorithm meant to address an inequity, how will they react? If underrepresented users feel pressured, or suspect that the algorithm may be biased against them, they will be less likely to use it. Take into account how historical discrimination and recent issues with underrepresentation in data sets may make your target users skeptical (e.g., arguably well-founded concerns in health care may drive inequality in Covid-19 vaccination).

Second, focus on building trust and helping users understand what the algorithm is meant to do and how it works. If algorithm adoption is optional (as in the case of Airbnb), this process of considering whether users, particularly users from underrepresented groups, will understand, trust, and adopt the algorithm is even more important. Communicate clearly about why the algorithm is being introduced and how it works, and incentivize people to use it, especially when it is more effective for minority groups. Make explaining that the initiative was launched to reduce racial inequities, and how it will do so, part of your rollout strategy.

***

Given the scalability and value of accurate predictions, businesses will increasingly deploy algorithms in their operations and services, and adoption will likely only grow. But companies need to address concerns that algorithms might produce biased outcomes against disadvantaged groups. Unfortunately, common data-science-driven approaches, such as processing the data and calibrating model specifications, are insufficient on their own. To best combat algorithmic bias, businesses should make the perception and adoption of algorithms, and market conditions like the ones we have described, a major part of rolling out algorithmic tools.

Done right, these tools may well mitigate human biases and narrow the economic gaps that arise from them. Done wrong, even a few missteps by established firms could undermine trust and slow AI algorithm deployment altogether.
