[Header image: abstract network of connected dots with the AI Index Report and HAI logos]

Data is key to building responsible AI

For three years in a row, LinkedIn has contributed insights to the Stanford Institute for Human-Centered Artificial Intelligence (HAI) AI Index Report, which measures and evaluates the rapid rate of AI advancement. This year’s AI Index is one of the most comprehensive reports to date. As in past years, the index takes a cross-industry approach, analyzing national economies, jobs, ethics, policy, and research.

The report includes LinkedIn’s unique insights, aggregated from our more than 810 million members across 200 countries. By partnering with Stanford HAI and pairing those insights with our unmatched real-time labor market data and talent trends, we hope to enable leaders and decision makers to take meaningful action to advance AI responsibly and ethically, with humans in mind.

One of the datasets we have contributed is AI skill penetration rates, which show the intensity with which LinkedIn members use AI skills in their jobs. Based on this data, we can see that AI talent is not distributed evenly by geography: India leads the world in AI skill penetration – 3.09 times the global average from 2015 to 2021 – followed by the United States, Germany, China, Israel, and Canada.


Second, we see that gaps persist even as AI becomes globally ubiquitous. Among the 15 countries listed, the AI skill penetration rates for women are higher than those for men in only 6 countries – India, Canada, South Korea, Australia, Finland, and Switzerland.


Measuring key trends and surfacing gaps like these is an important first step toward creating AI products and services that are fair and responsible by design. For LinkedIn, this means that everything we build is intended to work as part of a unified system, that the right protections are in place, and that we are mitigating any unintended consequences.

One key avenue for doing this is the LinkedIn Fairness Toolkit (LiFT), which uses commonly considered fairness definitions to enable the measurement of fairness in large-scale machine learning workflows. By making the same tools we use available to other public and private institutions, we are putting processes in place that bring fairness to AI-driven product design.
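To make that idea concrete, here is a minimal sketch of measuring one commonly considered fairness definition, demographic parity, over a model’s outputs. It is written in Python purely for illustration; LiFT itself is a Scala/Spark library, and the function, column names, and data below are hypothetical rather than LiFT’s actual API.

# Minimal illustration of one commonly considered fairness definition:
# the demographic parity difference between groups. This is not LiFT's
# actual (Scala/Spark) API; column names and data are hypothetical.
import pandas as pd

def demographic_parity_difference(df, group_col, prediction_col):
    # Positive-prediction rate per group, e.g. the share of members a
    # model recommends for a job within each group.
    rates = df.groupby(group_col)[prediction_col].mean()
    # Gap between the most- and least-favored groups; 0 means parity.
    return rates.max() - rates.min()

scores = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "recommended": [1, 1, 0, 1, 0, 0],
})
print(demographic_parity_difference(scores, "group", "recommended"))  # ~0.33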

With resources like the 2022 AI Index Report, it is our hope that leaders and decision makers can craft programs and policies that make the acquisition of AI skills more responsible and equitable. AI is the tool of the present and the future, and it is up to us to help level the playing field so that all can access and reap its benefits.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

AI Skill Penetration Rate shows the prevalence of AI skills across occupations, or the intensity with which LinkedIn members use AI skills in their jobs. It is calculated by computing the frequencies of LinkedIn users’ self-added skills in a given area from 2015–2021, then reweighting those figures by using a statistical model to get the top 50 representative skills in that occupation. For global comparisons, the relative penetration rate of AI skills is measured as the sum of the penetration of each AI skill across occupations in a given country or region, divided by the global average across the same occupations. For example, a relative penetration rate of 2 means that the average penetration of AI skills in that country or region is 2 times the global average across the same set of occupations.
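As a toy illustration of that calculation (using made-up penetration figures for three occupations, not real AI Index data), the sketch below reproduces the “relative penetration rate of 2” example: the country’s summed penetration across occupations is divided by the global sum for the same occupations.

# Relative AI skill penetration rate for one country, per the definition
# above. All numbers here are invented for illustration only.
country_penetration = {"software engineer": 0.30, "data analyst": 0.20, "researcher": 0.10}
global_penetration = {"software engineer": 0.15, "data analyst": 0.10, "researcher": 0.05}

relative_rate = (
    sum(country_penetration.values())
    / sum(global_penetration[occ] for occ in country_penetration)
)
print(relative_rate)  # 2.0 -> AI skill penetration is twice the global average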