Author: Tiffany Wong, CFA
The pandemic has had lasting effects on the way we work as we move towards a digital workspace.
Social distancing has accelerated a trend towards remote working, which introduces more flexibility. In the long term, this flexibility may allow and encourage a more diverse pool of candidates, such as single working parents or people of colour, to work in the demanding investment management industry. But are we yet at a stage where technology is truly inclusive and will support a flexible working style for candidates from all walks of life?
While we focus here on socioeconomic effects, the impact of the pandemic is unequal across intersectional groups, including gender, race and disability. In spite of efforts by governments, the pandemic is likely to lead to a deepening social gap due to long periods of unemployment, especially for working-class roles with less flexibility. Equity and fixed-income prices have been volatile during the crisis, which may hurt the less financially educated if they have piled their savings into assets with little understanding of their financial standing and the risks involved.
Governments are also accumulating debt which, as reported by the FT, may negatively affect future funding for education. This could lead to a spike in drop-out numbers due to financial difficulties, not to mention the disproportionate impact of the pandemic on mental health among lower-income households.
How might recruitment processes need to be adapted to tackle social inclusivity?
Although remote working could encourage a more diverse pool of candidates, the technology we use may not be sophisticated enough to deal with the challenges of a widening social gap. AI recruitment algorithms are prone to bias, which could unfairly deny underprivileged candidates the opportunity to obtain a role in the financial sector, even when they are equally or more capable than a candidate from a traditional background.
For example, Amazon’s hiring algorithm was found to be biased towards male candidates, as it was trained on historic data in which the majority of recruits were male. Similarly, decisions could naturally be skewed if AI tools were trained on historic data embedded with socioeconomic bias.
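To illustrate how this kind of skew arises, the sketch below shows a deliberately naive keyword scorer "trained" on toy historic hiring data. All CVs, keywords and weights are hypothetical; the point is that an irrelevant token correlated with social background (here, "rowing") picks up a large positive weight purely because it co-occurred with past hires.

```python
# Hypothetical sketch: a naive keyword scorer trained on skewed historic data.
# The CVs below are invented toy data, not from any real recruitment system.
from collections import Counter

historic_hires = [          # past successful CVs, skewed by background
    ["python", "finance", "rowing"],
    ["excel", "finance", "rowing"],
    ["python", "modelling", "rowing"],
]
historic_rejects = [        # past rejected CVs
    ["python", "finance", "retail"],
    ["excel", "modelling", "retail"],
]

hire_counts = Counter(w for cv in historic_hires for w in cv)
reject_counts = Counter(w for cv in historic_rejects for w in cv)

def score(cv):
    # A word's weight is how much more often it appeared in hires than rejects.
    return sum(hire_counts[w] - reject_counts[w] for w in cv)

# "rowing" -- a class proxy irrelevant to the job -- now dominates the score:
print(score(["python", "finance", "rowing"]))  # 5
print(score(["python", "finance", "retail"]))  # 0, for otherwise identical skills
```

Nothing in the training step distinguishes job-relevant skills from socioeconomic proxies, so the model inherits whatever correlations the historic data contains.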
We have analysed how AI could be used in the recruitment process in Exhibit 1 below. As the exhibit shows, the problems in sourcing and screening are existing issues; those in evaluation may become increasingly prevalent due to the Covid-19-induced shift towards remote working and recruitment.
It should be noted that algorithms are trained on historic data and can therefore be ingrained with systemic bias from past prejudices. If designed appropriately, however, algorithms can be a powerful tool to reduce subconscious human bias. For example, 'counterfactual fairness' can be used to ensure a model's decision would not change even if sensitive factors such as ethnicity and social class were different.
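A simplified way to probe this property is a "flip test": score each candidate twice, once as-is and once with the sensitive attribute counterfactually flipped, and flag any candidate whose decision changes. The sketch below is a hypothetical illustration with invented field names and a toy scoring model; full counterfactual fairness additionally requires a causal model of how the sensitive attribute influences other features.

```python
# Minimal sketch of a counterfactual "flip test" on a toy screening model.
# All field names and weights are hypothetical illustrations.

def score_candidate(candidate: dict) -> float:
    """Toy screening model: scores a CV on skills and experience only.
    A fair model never reads sensitive attributes such as 'social_class'."""
    return 0.6 * candidate["skills_match"] + 0.4 * candidate["years_experience"] / 10

def flip_test(candidates: list, sensitive_key: str, values: tuple) -> list:
    """Return candidates whose hire/reject decision changes when the
    sensitive attribute is counterfactually flipped."""
    threshold = 0.5
    unfair = []
    for c in candidates:
        original = score_candidate(c) >= threshold
        flipped = dict(c)
        flipped[sensitive_key] = values[1] if c[sensitive_key] == values[0] else values[0]
        if original != (score_candidate(flipped) >= threshold):
            unfair.append(c)
    return unfair

candidates = [
    {"skills_match": 0.9, "years_experience": 5, "social_class": "A"},
    {"skills_match": 0.4, "years_experience": 2, "social_class": "B"},
]

# Because score_candidate never reads 'social_class', no decision flips.
print(flip_test(candidates, "social_class", ("A", "B")))  # []
```

The flip test only detects direct use of the sensitive attribute; proxies (such as postcode or school) would still need a causal treatment to catch.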
Artificial Intelligence (AI) is typically used in three steps of the recruitment process (World Economic Forum) (Exhibit 1):
|Step of the recruitment process|Use case|Potential bias|Examples|
|---|---|---|---|
|Sourcing|To attract a pool of applicants. AI may be used to augment the layout and phrasing of job advertisements.|Algorithms may skew the display of employment ads by gender, race and social class, affecting the audience the ads reach.|Research shows that Facebook algorithms skewed the display of employment ads by gender and race.|
|Screening|To screen cover letters and CVs, identifying appropriate candidates by tracking keywords and establishing relative rankings.|Details in cover letters and CVs may be used to infer social status in making decisions.|Sociolinguistics studies how factors including ethnicity and social class affect an individual's dialect, which could be interpreted by the machine.|
|Evaluation|To evaluate candidates based on information such as their body language in video interviews.|Candidates of different social classes may have recognisable facial, vocal or body-language cues that algorithms unfairly interpret in their assessments.|HireVue, a recruitment start-up, recently had a complaint filed against the potential bias and inaccuracy of its facial analysis, among other techniques.|
How technology affects wealth creation for the lower social class
Given the pandemic is likely to create a widening wealth gap between social classes, what are the risks and opportunities that technology can bring to the lower social classes in financial management?
Algorithms are used to assess whether a product is suitable for a person. Providers have innovative options available for assessing the suitability and appropriateness of products, such as gamification or algorithmic profiling. These could expose decision-makers to the same bias pitfalls seen in recruitment algorithms.
For example, individuals may be refused access to life insurance products because algorithms erroneously perceive a certain social status as higher risk, denying them protection against mortality events.
At the same time, innovation in technology is making investment products more accessible than ever. Fintech products usually involve fewer administrative procedures, lowering the barrier for the less financially literate. Product features, such as fractional stock trading, make investing affordable and easier to understand.
However, we should not lose sight of how the gamification of investing could encourage those who are less financially educated to take risks they do not fully understand. The volatile markets following recent events could expose these vulnerable groups to downside investment risks at a time when they are highly dependent on their savings to pull through.
The challenge ahead
Post-pandemic, and in a potential recession, the investment management industry and the wider financial services industry will face an increasing challenge to remain socially inclusive and conscious. Although there have been pitfalls in the technology we use, innovation can also keep the industry competitive and help make it inclusive for all.
Tiffany Wong is a manager at Deloitte within the Digital Risk area. She has a specific interest in AI and machine learning and has led or supported multiple risk management workstreams for financial services AI and machine learning products. She recently wrote a chapter on 'AI and Business Ethics in Financial Markets' for The AI Book, a Wiley publication. She is also the working group co-lead of the CFA UK AI and Machine Learning Working Group and a member of the Social Inclusive Working Group.
This article is written on behalf of the Social Economic Inclusion Working Group.