5 ideas to make data work for good

It's the time of year for reflection and for thinking about how to apply what we've learned going forward. Doing this exercise with a focus on artificial intelligence (AI) and data may never have been more important. The release of ChatGPT opened up a perspective on the future that is as fascinating - we can interact with a seemingly intelligent AI that summarizes complex texts, spits out strategies and writes quite solid arguments - as it is frightening ("the end of truth").

What moral and practical compass should guide humanity in the face of data-driven technology? To answer this question, it's worth turning to non-profit innovators - entrepreneurs who focus on solving deep-rooted societal problems. Why are they useful guides? First, they are adept at spotting unintended consequences of technology early on and figuring out how to mitigate them. Second, they innovate with technology and create new markets, guided by ethical considerations. So here are five principles, distilled from the work of over 100 carefully selected social entrepreneurs around the world, that shed light on how to build a better path:

Artificial intelligence must be associated with human intelligence

AI is not smart enough to interpret our complex and diverse world. It is simply bad at understanding context. That's why Hadi Al Khatib, founder of Mnemonic, has set up an international network of humans to mitigate the mistakes of technology. They save eyewitness accounts of potential war crimes – now mostly in Ukraine, previously in Syria, Sudan and Yemen – from being deleted by YouTube and Facebook. The platforms' algorithms do not understand the local language or the political and historical circumstances in which these videos and photos were taken. Mnemonic's network securely archives digital content, verifies it - yes, including with the help of AI - and makes it available to prosecutors, investigators and historians. It has provided key evidence that led to successful prosecutions. What is the lesson here? The better the AI seems, the more dangerous it becomes to trust it blindly. Which brings us to the next point:

AI cannot be left to technologists

Social scientists, philosophers, changemakers and others need to come to the table. Why? Because the data and cognitive models that train algorithms tend to be biased, and computer engineers are unlikely to be aware of this bias. A growing body of research has found that, from health care to banking to criminal justice, algorithms in the United States have systematically discriminated, primarily against black people. Biased data input means biased decisions - or, as the saying goes: garbage in, garbage out. Gemma Galdon, founder of Eticas, works with companies and local governments on algorithmic audits to prevent exactly this. Data for Black Lives, founded by Yeshi Milner, forges alliances between organizers, activists and mathematicians to collect data from communities that are underrepresented in most datasets. The organization has been a key force in shedding light on the fact that the death rate from Covid-19 was disproportionately high in black communities. The lesson: in a world where technology has an outsized impact on humanity, technologists must be aided by humanists and by communities with lived experience of the problem at hand, to prevent machines from being driven by the wrong models and bad inputs. Which brings us to the next point:
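To make "garbage in, garbage out" a bit more concrete, here is a minimal, hypothetical sketch of one check an algorithmic audit might run: comparing a model's positive-decision rates across demographic groups and flagging gaps under the common "four-fifths" rule of thumb. The data, group labels and threshold are illustrative assumptions, not a description of Eticas' actual methodology.

```python
# Minimal sketch of one audit check: the "four-fifths" disparate impact test.
# Illustrative only -- the groups, decisions and threshold below are made up.

from collections import defaultdict

# Each record: (demographic group, whether the algorithm approved the application)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count approvals and totals per group
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

# Approval rate per group, compared against the best-treated group
rates = {g: approved / total for g, (approved, total) in counts.items()}
baseline = max(rates.values())

for group, rate in rates.items():
    ratio = rate / baseline
    flag = "FLAG" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: approval rate {rate:.0%}, ratio vs. baseline {ratio:.2f} [{flag}]")
```

Real audits go much further - examining where the training data came from, how the model behaves over time and what its downstream impact is - but even a simple rate comparison like this can surface the kind of bias described above.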

It's about people, not products

Technology must be conceptualized beyond the product itself. How communities use data - or rather, how they are empowered to use it - is critical to impact and outcomes, and determines whether a technology leads to more harm or more good in the world. A good example is the social networking and knowledge exchange application SIKU (named after the Inuktitut word for sea ice), developed by the Arctic Eider Society, an Inuit-led organization in the Canadian Arctic.
