Fundraising AI Forum 2021

I participated in #FUNAI2021. You can find the slides from my presentation here.

Extended Abstract

Over the past five years there has been a rise in public awareness about the efficacy of social media for targeted advertising. Most notoriously, social media was heavily leveraged in both the 2012 Obama campaign (Enli & Naper, 2016) and the 2016 Trump campaign to gain advertising advantages. In the latter case, the now-defunct firm Cambridge Analytica made illegal and unethical use of intimate personal data acquired from Facebook's servers, securing an unprecedented advantage (Isaak & Hanna, 2018). Perhaps more than any other case, this episode has fueled public skepticism about targeted advertising. Yet, despite such increased public scrutiny and concerns about privacy (Gruzd & Hernández-García, 2018), evidence suggests that targeted advertising decreases advertisement avoidance, potentially lowering advertising costs for the organizations that employ it (Jung, 2017).

Should charities follow suit? Evidence from two prior studies conducted by researchers at Dalhousie University suggests that charities could similarly leverage social media to conduct targeted advertising. In the first study, publicly accessible Twitter data was used to predict political donations among 438 Twitter users with 70% accuracy (Conrad & Kešelj, 2016). In the second study, Twitter data was used to predict donations to charities with 71% accuracy (Calix Woc, 2020). These results could likely be improved further, lending confidence that such data could be leveraged to conduct targeted donation asks, ultimately decreasing charities' costs of prospecting online. This would certainly be welcome in the era of Covid-19 and our increasingly digital world. However, the use of such targeted advertising techniques could also increase privacy concerns among donors and stakeholders, even when only leveraging publicly accessible data (Jung, 2017).
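To make the general approach concrete, the sketch below shows a character n-gram text classifier of the kind described in the cited studies. It is a minimal, illustrative assumption of how such a pipeline might look: the tweets, labels, feature settings, and model choice are hypothetical and are not the actual pipelines used by Conrad & Kešelj (2016) or Calix Woc (2020).

```python
# Illustrative sketch only: character n-gram features + a standard classifier.
# All data and parameters below are hypothetical, chosen for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one concatenated tweet history per user,
# labelled 1 if the user is a known donor and 0 otherwise.
tweets = [
    "volunteering at the food bank this weekend #givelocal",
    "new phone day, unboxing video soon",
    "proud to match donations to the children's hospital #charity",
    "game night with friends, pizza and trivia",
]
labels = [1, 0, 1, 0]

# Character n-grams (2-4 characters, within word boundaries) feeding a
# logistic regression classifier.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.5, stratify=labels, random_state=42
)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice, accuracies like the 70-71% reported above would come from much larger labelled samples and cross-validation, not a toy split like this one.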

It is now critically important to invest in an open science research programme on how artificial intelligence and social media can be ethically leveraged in the donor prospecting process. Open science can mean many things but has been formally described as “transparent and accessible knowledge that is shared and developed through collaborative networks” (Vicente-Saez & Martinez-Fuentes, 2018). In practice, open science typically involves publishing data openly (when possible), documenting research methods transparently, and releasing open access scientific reports. Such open science research has the potential to generate new insights for all charities and nonprofits, while also increasing transparency, awareness, and control for potential donors.

It is possible to create a collaborative process for conducting this research. Such a process would ask prior charitable donors to explicitly consent to data linkages between past donations and social media profiles. The results of the research could be published publicly, and the artificial intelligence generated could be used solely to improve matching between prospective donors and potential causes. This would ultimately build a virtuous cycle of innovation and trust between donors and charities, preparing the sector for the challenges of the AI-enabled age.
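As a rough illustration of what such a consent-gated linkage might look like in code, the sketch below joins donation records to social media handles only where the donor has explicitly opted in. Every name, field, and record here is a hypothetical assumption; the abstract does not prescribe any particular implementation.

```python
# Hypothetical illustration of consent-gated data linkage: donation records
# are joined to social media handles only for donors who explicitly opted in.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DonorRecord:
    donor_id: str
    total_donated: float
    consented_to_linkage: bool          # explicit, recorded consent
    twitter_handle: Optional[str] = None

def build_research_dataset(donors: list[DonorRecord]) -> list[dict]:
    """Return only records that may enter the linked research dataset."""
    return [
        {"handle": d.twitter_handle, "total_donated": d.total_donated}
        for d in donors
        if d.consented_to_linkage and d.twitter_handle
    ]

donors = [
    DonorRecord("d001", 250.0, True, "@example_donor"),
    DonorRecord("d002", 75.0, False, "@private_person"),   # excluded: no consent
    DonorRecord("d003", 500.0, True, None),                # excluded: no handle
]
print(build_research_dataset(donors))   # only d001 is linked
```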

References

Calix Woc, C. (2020). Psychographic Profiling of Charitable Donations Using Twitter Data and Machine Learning Techniques [Master’s Thesis]. Dalhousie University.

Conrad, C., & Kešelj, V. (2016). Predicting Political Donations Using Twitter Hashtags and Character N-Grams. 2016 IEEE 18th Conference on Business Informatics (CBI), 2, 1–7.

Enli, G., & Naper, A. A. (2016). Social Media Incumbent Advantage: Barack Obama’s and Mitt Romney’s Tweets in the 2012 U.S. Presidential Election Campaign. In The Routledge Companion to Social Media and Politics (pp. 364–378). Routledge.

Gruzd, A., & Hernández-García, Á. (2018). Privacy Concerns and Self-Disclosure in Private and Public Uses of Social Media. Cyberpsychology, Behavior, and Social Networking, 21(7), 418–428. https://doi.org/10.1089/cyber.2017.0709

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Jung, A.-R. (2017). The influence of perceived ad relevance on social media advertising: An empirical examination of a mediating role of privacy concern. Computers in Human Behavior, 70, 303–309. https://doi.org/10.1016/j.chb.2017.01.008

Vicente-Saez, R., & Martinez-Fuentes, C. (2018). Open Science now: A systematic literature review for an integrated definition. Journal of Business Research, 88, 428–436. https://doi.org/10.1016/j.jbusres.2017.12.043