Fundraising AI Forum 2021

I participated in #FUNAI2021. You can find the slides from my presentation here.

Extended Abstract

Over the past five years there has been a rise in public awareness about the efficacy of social media for targeted advertising. Most notoriously, social media was heavily leveraged in both the Obama 2012 (Enli & Naper, 2016) and Trump 2016 political campaigns to generate advertising advantages. In the case of the latter, illegal and unethical use of social media data by the now-defunct Cambridge Analytica facilitated an unprecedented advantage by leveraging intimate personal data acquired from Facebook’s servers (Isaak & Hanna, 2018). Perhaps more than any other case, this has fuelled public skepticism about targeted advertising. Yet, despite such increased public scrutiny and concerns about privacy (Gruzd & Hernández-García, 2018), evidence suggests that targeted advertising decreases advertisement avoidance, potentially decreasing advertising costs for the organizations that employ it (Jung, 2017).

Should charities follow suit? Evidence from two prior studies conducted by researchers at Dalhousie University suggests that social media can similarly be leveraged to conduct targeted advertising. In the first study, publicly accessible Twitter data was used to predict political donations among 438 Twitter users with 70% accuracy (Conrad & Kešelj, 2016). In the second study, Twitter data was used to predict donations to charities with 71% accuracy (Calix Woc, 2020). These results could likely be improved further, lending confidence that such data could be leveraged to conduct targeted donation asks, ultimately decreasing charities’ costs of prospecting online. This would certainly be welcome in the era of COVID-19 and our increasingly digital world. However, the use of such targeted advertising techniques could also increase privacy concerns among donors and stakeholders, even when only leveraging publicly accessible data (Jung, 2017).
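To make the general approach concrete, here is a hedged sketch of character n-gram text classification in the spirit of the first study. The tweets, labels, and model settings below are invented toy examples of my own, not the study’s actual data or pipeline:

```python
# Illustrative sketch: classifying short texts with character n-grams.
# All data here is invented toy data for demonstration purposes only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 1 = donor, 0 = non-donor
tweets = [
    "proud to support our local food bank today",
    "just donated to the children's hospital fund",
    "matching gifts doubled my donation this year",
    "great game last night, what a comeback",
    "new phone arrived, unboxing video soon",
    "traffic on the bridge is terrible again",
]
labels = [1, 1, 1, 0, 0, 0]

# Character n-grams (2-4 characters) are robust to hashtags, slang, and typos
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

predictions = model.predict(tweets)
```

In practice, a study of this kind would train on thousands of labeled accounts and report accuracy on held-out data rather than the training set.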

It is now critically important to invest in an open science research programme on how artificial intelligence and social media can be ethically leveraged in the donor prospecting process. Open science can mean many things but has been formally described as “transparent and accessible knowledge that is shared and developed through collaborative networks” (Vicente-Saez & Martinez-Fuentes, 2018). Generally, open science can consist of the transparent publication of data (when possible), as well as the publication of transparent methods for conducting the research, and the publication of open access scientific reports. Such open science research has the potential to generate new insights for all charities and nonprofits, while also increasing transparency, awareness, and control for potential donors.

It is possible to create a collaborative process for conducting this research. Such a process would ask prior charitable donors to explicitly consent to data linkages between past donations and social media profiles. The results of the research could be published publicly, and the artificial intelligence generated could be used solely to improve matching between prospective donors and potential causes. This would ultimately build a virtuous cycle of innovation and trust between donors and charities, preparing the sector for the challenges of the AI-enabled age.


Calix Woc, C. (2020). Psychographic Profiling of Charitable Donations Using Twitter Data and Machine Learning Techniques [Master’s thesis]. Dalhousie University.

Conrad, C., & Kešelj, V. (2016). Predicting Political Donations Using Twitter Hashtags and Character N-Grams. 2016 IEEE 18th Conference on Business Informatics (CBI), 2, 1–7.

Enli, G., & Naper, A. A. (2016). Social Media Incumbent Advantage: Barack Obama’s and Mitt Romney’s Tweets in the 2012 U.S. Presidential Election Campaign. In The Routledge Companion to Social Media and Politics (pp. 364–378). Routledge.

Gruzd, A., & Hernández-García, Á. (2018). Privacy Concerns and Self-Disclosure in Private and Public Uses of Social Media. Cyberpsychology, Behavior, and Social Networking, 21(7), 418–428.

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59.

Jung, A.-R. (2017). The influence of perceived ad relevance on social media advertising: An empirical examination of a mediating role of privacy concern. Computers in Human Behavior, 70, 303–309.

Vicente-Saez, R., & Martinez-Fuentes, C. (2018). Open Science now: A systematic literature review for an integrated definition. Journal of Business Research, 88, 428–436.

Three tips on how to learn well during COVID-19, backed by brain science

If you are reading this, you are likely aware that most universities in Canada are transitioning to an online format for the Fall 2020 semester. This has presented many challenges for professors because teaching online is totally different! We are putting a lot of effort into learning how to teach effectively. However, in my many discussions about online learning over the past few months, we haven’t spent a lot of time talking about another challenge: how to effectively prepare students for this transition. I thought I would share three tips on how to be an effective online learner, some of which come from my research.

Tip 1: Take micro-breaks regularly

I recently re-analyzed some of my PhD thesis data and found something interesting. In my PhD thesis, I described studies of people’s brain patterns as they watched long lecture videos. The software I created also asked them questions periodically throughout the video. When asked at the 15-minute mark, participants reported being largely on task, though at the 30-minute mark they reported a significantly higher degree of mind wandering. I also found that the degree of reported mind wandering significantly impacted how well students learned from the lecture. Long lectures are hard to learn from! This isn’t news.

What I recently discovered is that there are some brain patterns that predict mind wandering pretty accurately. These patterns included significantly higher levels of delta waves (which are associated with sleepiness) and alpha waves (which are associated with meditation and self-directed thoughts). Clearly, the longer you focus on a lecture video, the more likely your brain will veg out and focus on other things. Though this does not prove that taking breaks would improve learning, the pattern is clear, and disrupting it may result in better learning. The picture below illustrates the degree of these waves in states of being “completely on task” versus “completely mind wandering”.

Regular breaks every 20 minutes or so may prevent your mind from wandering. When your mind is focused, you learn better. 

Figure: Differences in delta (sleepy waves) and alpha (meditation waves) when your mind is wandering. Minds are more likely to wander as a lecture progresses.
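For readers curious how delta and alpha activity can be quantified, here is an illustrative sketch estimating band power with Welch’s method. The signal below is synthetic (a 10 Hz alpha-like oscillation plus noise), and the band ranges are conventional definitions; this is not the exact analysis from my thesis:

```python
# Illustrative sketch: estimating delta (1-4 Hz) and alpha (8-12 Hz) band power
# from a synthetic signal. Real EEG analysis involves much more preprocessing.
import numpy as np
from scipy.signal import welch

fs = 256                          # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)      # 10 seconds of signal
rng = np.random.default_rng(42)
# Synthetic "EEG": a 10 Hz alpha-like oscillation plus background noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

# Power spectral density via Welch's method
freqs, psd = welch(eeg, fs=fs, nperseg=512)

def band_power(freqs, psd, lo, hi):
    """Approximate total power in the band [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

delta = band_power(freqs, psd, 1, 4)   # sleepiness-associated band
alpha = band_power(freqs, psd, 8, 12)  # meditation/self-directed thought band
```

Because the synthetic signal oscillates at 10 Hz, the alpha band dominates here; in a real mind-wandering study, it is the relative shift in these bands over time that matters.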

Tip 2: Use multimedia that is most effective for you

Some people like learning from books while others love YouTube videos (I am guilty!). However, for a serious online learner, it is usually best to have a combination of tools at your disposal. Education scientist Richard Mayer spent most of his career explaining how and why this works. In Multimedia Learning he argued that “humans possess separate information-processing channels for visually represented material and auditorily represented material.” In other words, learning from a combination of pictures and words is better than learning from either pictures alone or words alone.

Our brains are organized in a series of networks that conduct and compute various sensory and processing tasks. Many of these networks have common features (e.g. audio and visual attention networks), but ultimately use different tools to do their job. If you push one network too hard, it is difficult to effectively abstract information into knowledge. However, by distributing the workload, your brain can more effectively abstract experiences and information to learn. Different people have different capacities, which may be one of the factors influencing preferred learning styles.

Sometimes, however, online learning is hampered by language barriers, or perhaps by a poorly delivered lesson. When this happens, a lesson generates what John Sweller called extraneous cognitive load. Though we need a bit of challenge to effectively learn, it can seem impossible when difficult material is delivered in a hard-to-understand format. When this happens, it is extra-important to find other resources that can supplement a lesson. If you are lucky, these resources will be in a format that works for you. Fortunately, in a world of YouTube and Coursera, we have no shortage of content to choose from.

If you find it hard to learn from a video, crack open your textbook. If you can’t find answers, don’t hesitate to search for other outside resources or reach out to your teaching team for help.

Tip 3: Be intentional about socializing online

Finally, one of the great challenges presented by the COVID-19 situation in Canada is that we will not be able to hold in-person social activities. In some ways, it almost feels like we live on spaceships, alone-yet-together; what YouTube creator C.G.P. Grey called “Spaceship You”. For many students, “Spaceship You” is challenging because good learning experiences are shaped in large part by your community of learning.

E-learning scholars have noticed that “social presence” is a critical component of e-learning success. In a classic paper, Johnson et al. (2008) demonstrated that satisfactory e-learning environments were perceived as 1) personal, 2) sociable, 3) sensitive, 4) warm and 5) active. A more recent meta-analysis on the role of social presence in online learning has supported this finding, and it is clear that social presence is important for learning online.

The challenge with social presence is that though instructors play a critical part in facilitating it, students must also take initiative to create such an environment. At Dalhousie, we have plans to implement some pretty new technology to help with this (which will be announced later this summer). However, technology is only good if it is actually used. It will be critically important for both students and instructors to be actively engaged on technology platforms.

University is a social experience. Be active on your course chat and other online community platforms. 

The fall semester will certainly be different than any of the others that have come before. Though there are challenges, there may also be unique opportunities for personal and professional growth. We don’t have all of the answers, but we are working hard to create a good learning experience. I am looking forward with an open mind; with any luck, 2020 will be remembered as a year of unprecedented internet innovation, not just disruption.

Technology Can Change Everything about the Business of Higher Ed (for the Better)

This post was originally featured in the Dalhousie Business Review on December 3rd, 2019. You can find the link to the original article here.

In 2012, an article in The American Interest made waves heralding the “End of the University as We Know It”. In this article, the author predicted that half of the colleges and universities in the United States would disappear within the next decade, because schools such as MIT or Harvard were poised to acquire millions of students by offering their courses for free over the internet. The technology behind this change would later be known as Massive Open Online Courses (MOOCs). With university students paying ever-increasing amounts in tuition, many predicted that no-to-low cost MOOCs would disrupt the traditional university business model.

However, as time passed it became increasingly clear that this technology would not succeed at disrupting traditional education as envisioned. The following year, a study of MOOC courses offered by The University of Pennsylvania found that fewer than five percent of MOOC participants actually finished the course they enrolled in, and more worryingly, that MOOC success was best predicted by whether a student already had a university degree. Other studies soon supported this finding and the initial optimism about MOOCs wavered. It seems that the MOOC experience is very different from the traditional university; with the benefit of hindsight in 2019, nobody is surprised by this discovery.

There are many reasons why MOOCs and other online education experiences differ from in-person classrooms. One likely factor (which is backed by significant evidence) is social presence, the ability to perceive others in a learning experience. Another factor is the design of the learning experience, which needs to be appropriately difficult and to inhibit mind wandering, similar to how a good instructor delivers a lecture. The evidence is clear: there are parts of a learning experience where a human element is critical to success.

Yet, the underlying financial challenge to the higher education business model remains, and universities are under ever-increasing pressure to reduce costs. This often translates to pressure to increase classroom size, downsize library collections, or to simply ask professors to teach more. If we agree that the human element is the secret sauce to quality teaching, how can colleges and universities adapt to face these pressures without compromising the very thing that makes higher education work?

Many sectors have already undergone (and continue to undergo) digital transformations, which have changed nearly everything about how they work. In the 1980s and 90s, for example, Ford underwent radical digital restructuring, which not only dramatically improved efficiency but completely changed the structure of its accounts payable department and its processes. In this classic case, Ford used the gains of early information technology to completely re-engineer how the company operates, largely by automating manual business processes. In 2019, companies, and to a lesser extent public institutions, continue to transform themselves with digital technology, which allows them to do more with the same number of human resources.

Higher education professionals are often perceived to be resistant to digital transformation, and often for good reasons. For instance, in The Slow Professor, Canadian professors Maggie Berg and Barbara Seeber critique the increasingly administrative nature of academic work and defend the merits of traditional education. They call on professors to actively resist the corporatization of academia and to force academic institutions to make time for slow, creative thought. Rather than spending time writing emails or publishing results as quickly as possible, they argue that professors (as well as other higher ed professionals) should instead take the time to cultivate scholarly value. At face value, this sentiment seems to resist the trend toward digital transformation, which often requires workers to leverage new technologies to increase productivity at the expense of work-life balance.

It doesn’t have to be this way. Though Ford ultimately used their efficiency gains to reduce the number of accounts payable employees by 75%, universities (as non-profit institutions) do not have to follow suit. We have a tremendous opportunity to use digital transformation to automate administrative work and free up time for what matters.

For instance, with proper e-learning capabilities, educators could create flipped classrooms complete with pre-recorded lecture components and (partially) automated evaluation, so that faculty can spend more time guiding hands-on experiences with smaller in-person tutorials. Such a digitally enhanced classroom would automate the boring stuff and free up time to do the things that matter: providing meaningful mentorship and formative experiences for students. Digital technologies can similarly be used to reduce the number of forms that need to be filled out, efficiently document information that would otherwise be repeatedly sent in dozens of emails, or to give data-driven insights into emerging problems before they arise.

Digital transformation is not new. Colleges and universities simply need to embrace a culture where digital transformation is possible. They must also avoid the trap of providing watered-down MOOCs and instead focus their information strategy on maximizing the qualities that defined the university experience in the first place. By doing this, colleges and universities can enhance the best qualities of higher education at a time that seems beset by insurmountable challenges. The transformation would be painful, but it would also be worth it.

The practical value of basic PhD research

There is a growing awareness about the downsides of pursuing a PhD. During my time as a PhD student, I was regularly reminded about the challenges of the academic job market and about the doctorate’s poor financial returns on investment.

Though there are also contrary views about the degree’s value, these views often point to the practical research skills that PhD students develop and their relevance to industry. Today, PhD students are often encouraged to conduct applied research, so that they can easily communicate and transfer the results of their work.

One of the overlooked practical benefits of the PhD degree is that it offers students the opportunity to conduct curiosity-driven basic research. PhD students often have the option to focus on answering fundamental questions about why or how the universe works, the answers to which might not be easily applied to a particular industry. Counter-intuitively, I argue that basic research has economic benefits, and that these benefits should be considered when assessing the practical value of PhD studies.

Risk and uncertainty

My argument is rooted in the difference between risk and uncertainty, as originally articulated by economist Frank Knight. Risk concerns situations when we do not know the outcome of a situation but can measure the odds. For example, researchers might not have discovered the best artificial intelligence algorithms for predicting air quality, but can predict that a solution is possible; the odds of success are known.

Uncertainty, on the other hand, occurs when the odds of success are unknown. Basic research pursues questions with uncertain outcomes and unpredictable practical value. This often leads to the perception that money spent on basic research is wasted. However, the impacts of basic research are occasionally enormous, with paradoxically huge practical applications.

One example is the research of British-Canadian computer scientist Geoffrey Hinton at the University of Toronto. In the 1980s, Hinton conducted research on artificial neural networks, which were then seen as curiosities with little practical value. Today, neural networks are the backbone of a new generation of artificial intelligence behind technologies ranging from Apple’s Siri to automated cancer detection systems.

A second example is Canadian physicist Donna Strickland, who recently won a Nobel Prize for her work on chirped optical pulses. Strickland, whose Nobel Prize-winning work began when she was a doctoral student, later reflected on how it took at least a decade for practical applications of her research to come into view.

Strategies for fostering basic research

Both Hinton and Strickland conducted basic research early in their academic careers that did not yield practical applications until decades afterwards. In both cases, the future practical benefits to society could not have been known or quantified at the time.

PhD studies offer the benefit of full-time research, which can become scarce later in a scholarly career. The PhD can be a vehicle for conducting and cultivating curiosity-driven research—a privilege that is not experienced elsewhere in society, yet has unique benefits.

The potential practical value of basic research, especially at the PhD level, should therefore be considered by research policymakers. However, policymakers may find it challenging to finance research activities when we cannot easily quantify the outcomes.

We can take steps to maximize the effectiveness of basic research by drawing on the ways entrepreneurs and angel investors deal with uncertainty. For example, policies can emphasize supporting a large number of promising or interesting PhD projects, rather than offering more financing for the best ones. This is a common approach taken by successful angel investors.

Alternatively, policymakers can attempt to maximize serendipity, which has been identified as aiding unintended discovery. Encouraging collaboration among PhD students across disciplines could increase the chance of discovery. Interdisciplinary PhD programs might further offer a way to encourage students to tackle some of the pressing issues of today’s world, which are often interdisciplinary in nature.

Regardless of approach, we must stop viewing the value of a PhD degree as purely intangible and recognize the economic and practical role it plays in basic research. Though not all PhD students can be expected to become a Hinton or a Strickland, basic research at the PhD level enabled their success. It can continue to enable future generations of researchers too, if we allow it.

Why cognitive enhancement technologies do not need new regulation

After digging through old files from my PhD, I ran into a copy of a class presentation I worked on with Sarah Macleod and Mohammad Habibnezhad on cognitive enhancement technologies. Examples of cognitive enhancement technologies include cognitive enhancing drugs such as Ritalin, or even whatever the heck Elon Musk is working on at Neuralink; what they have in common is that they make us better at thinking or learning. One of the potential future applications of my thesis work is the development of computer-based education that adapts to users’ attention. Such technologies may one day deliver education radically better than either our current MOOCs or even in-person lectures, and are therefore examples of cognitive enhancement technologies.

The presentation that we gave concerned whether cognitive enhancement technologies should be strictly regulated. We argued that cognitive enhancement technologies do not require additional regulation. Our argument was best summarized by the following chart:

Cognitive enhancement technologies can seem scary, but there are already existing frameworks for understanding them. In Canada, the Food and Drugs Act governs all food, drugs, and medical devices, including any devices that modify or correct the body structure of humans. Both cognitive enhancing drugs and devices that modify the functioning of the human brain (as in Musk’s case) would therefore be governed by the Food and Drugs Act. This regulatory regime is fundamentally designed to ensure the safety of such technologies or, failing that, to ensure they offer sufficiently great rewards for the expected risks. If such high risk technologies are unable to offer great benefits to their users, they are likely to be banned.

I find the lower half of the matrix more interesting. When technologies are low risk to humans, we can envision them as either goods or rights. For example, there is pretty good evidence that coffee is a cognitive enhancer. There is some evidence that caffeine provides benefits to learning and memory, and even better evidence that it improves reaction time. However, the benefits of caffeine are slight. In market economies, we normally think of these sorts of things as ‘goods,’ insofar as they satisfy a consumer’s want. The main benefit of coffee is that it satisfies my desire for coffee; potential cognitive enhancement is secondary.

Sometimes, however, goods can be low risk and offer advantages so great that not having them disadvantages a person’s capability to be a functioning member of society. For example, access to primary and secondary education is categorically different from access to coffee. People who do not have access to primary and secondary education are severely limited in their ability to take part in society. Children who have access to education can learn the skills required to participate in the economy or polity, or potentially choose to pursue tertiary education of their choosing. Those who do not have access are severely limited in their capabilities and agency and will never be able to choose how to contribute to society. This idea is best summarized by Amartya Sen and Martha Nussbaum, and is often called the capability approach to rights.

I believe that low risk, high benefit cognitive enhancement technologies may fall into this category of ‘rights’ if they truly offer large political or economic advantages to their consumers. If we imagine a radically better way to teach students using learning technology, students who used such hypothetical technology would have significantly greater capabilities in society. They could potentially learn in a fraction of the time, becoming much more productive and potentially much more competitive. People who do not have access to this hypothetical technology would likewise be severely limited in their capabilities. We would therefore need to consider universal access to such cognitive enhancing technologies, at least if they become adopted at a large scale. These days, there are even business models that incentivize both innovation and free access to such high benefit technologies. Such business models might serve as a starting point for these technologies as they eventually make their way to open access.

Long story short, this is why I believe that cognitive enhancing technologies do not need additional regulation. If they are high risk technologies, existing regulation covers them. If they are low risk, they are either goods or rights depending on their benefits; we have existing methods of distributing both. Black Mirror will have to wait on this one.

Reflections about PhDs on the eve of a defence

Despite my noble intentions with respect to this blog, I have been largely unsuccessful at adding content. There are many reasons for this, such as the aggressive teaching and publishing deadlines from last semester. However, a big part of the reason for this is my looming thesis defence, due to happen tomorrow at 10:00 am. I thought it would be fitting to reflect a bit on what it means to do a PhD and what the last four years have entailed, in hindsight.

Having done more than my fair share of university degrees, I can attest to how the PhD is quite different from the others. There are clear financial reasons for pursuing Masters or professional programs. However, in many disciplines PhDs come with few financial rewards. According to the US Census Bureau, in many fields with robust professional programs (i.e. Business or Law), median earnings among PhD holders are lower than those of their professional counterparts (i.e. MBA, JD). When one considers that the opportunity cost of doing a PhD is 3 to 7 years of productive labour, it becomes clear that the motivation for doing a PhD is often not financial. Realistically, the PhD only equips students to do one thing: make a substantial contribution of research to the academic community in the discipline they have decided to pursue. Though some PhDs also require students to gain teaching experience, this is not mandatory in all PhD programs. When it is mandatory, students can expect to spend hundreds of hours teaching a course or working as a teaching assistant. This is small compared to the thousands of hours spent cultivating research.

The best analogy of a PhD that I have yet encountered was written by Tad Waddington in Lasting Contribution. His quote reads:

The last step of the [education] process is to contribute to knowledge, which is unlike the previous steps. Elementary school is like learning to ride a tricycle. High school is like learning to ride a bicycle. College is like learning to drive a car. A master’s degree is like learning to drive a race car. Students often think that the next step is more of the same, like learning to fly an airplane. On the contrary, the Ph.D. is like learning to design a new car. Instead of taking in more knowledge, you have to create knowledge. You have to discover (and then share with others) something that nobody has ever known before.

When I first read this quote three years ago, it stuck with me. I had the good fortune of having pursued two master’s degrees before starting my PhD and had originally thought that the PhD would be a more advanced version of the previous two. Looking back, I don’t believe that was the case, and I agree with Waddington more than ever. If you are ever considering doing a PhD, I recommend it only if you are the sort of person who enjoys spending a ridiculous amount of time and energy to make a difficult, small, yet very real contribution to human knowledge.

It’s been a pleasure to have the opportunity to pursue and cultivate interdisciplinary research which I feel truly does break down barriers between disciplines. It has been challenging and at times grueling, but also rewarding, and I believe I have come out a better person. I would like to thank everyone for supporting me through the journey. I wouldn’t do it any differently if I could do it all again.

An Interactive Demonstration of iframe

One of the courses I teach is called Information Management Systems, in Dalhousie’s Master of Library and Information Studies program. As part of this week’s lab, we used Microsoft 365 to explore cloud applications and web services. Cloud applications are one of the many technologies that make digital workplaces possible, and we explored two new Microsoft applications that are designed to change work processes. The first is Sway, an application designed to allow users to make high quality, speedy, and interactive presentation content that can be shared broadly on the internet. This post demonstrates how Sway works by sharing a presentation through an iframe, one of the HTML5 features that supports interactive media content. Feel free to explore the sample presentation below. Note: none of this content is mine; it is all curated by the Nova Scotia Archive and is borrowed strictly for demonstration and learning purposes. You can find the original source of the content here.
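For those curious about the markup itself, embedding a Sway presentation with an iframe looks something like the following sketch. The src URL is a placeholder, not the actual presentation’s embed link:

```html
<!-- Embed an external page (e.g. a Sway presentation) inside the current page.
     The src below is a placeholder, not the real embed link. -->
<iframe
  src="https://sway.office.com/s/EXAMPLE_ID/embed"
  width="760"
  height="500"
  style="border: none;"
  allowfullscreen>
</iframe>
```

The iframe simply renders another document inside the page, which is why the interactive Sway content keeps working even though it lives on Microsoft’s servers rather than this site.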

Why MOOCs are bad and what we can do about it

Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013, December). The life cycle of a million MOOC users. In MOOC Research Initiative Conference (pp. 5-6).

In 2011, Sebastian Thrun and Peter Norvig had the bright idea of recording their Stanford University lectures and posting them on the internet. The initiative was incredibly successful, and hundreds of thousands of students flocked over the broadband highways to learn about artificial intelligence from two of the greatest minds in the field. In fact, the original initiative was so successful that Thrun later left his job to found Udacity, today one of the most innovative and disruptive forces in university education. By 2012, it seemed that the whole world was talking about Massive Open Online Courses (“MOOCs”) and that MOOCs were on the path to transforming teaching forever by providing the highest quality education to everyone, everywhere, for free.

This story has a deeply personal element for me. Then an unemployed (or at times underemployed) philosophy grad, I had to make a decision about whether to go back to school to pursue yet another graduate degree, this time in computer science. I sometimes wonder what my life would have been like if I had decided to take a year off and gorge myself on an intellectual mash of Stanford videos on machine learning. I usually conclude that it would have been for the worse. I am very thankful that I ended up going back to school, because I probably learned a lot more than I would have otherwise. In late 2013 and early 2014, a number of quantitative studies were published that were like a wet blanket over Silicon Valley’s burning fire for MOOCs. Researchers at Penn, for instance, found that as few as 5% of MOOC registrants actually finish their courses, while only a fraction of those attain high grades. What’s worse, MOOC users were found to come disproportionately from educated, male, and wealthy backgrounds, largely in the USA. So much, then, for the fad that was the MOOC revolution. Or so the story goes.

Why do MOOCs suck so much at teaching the people that they are trying to help? One of the many reasons is that they are not well-designed. Robert Ubell from NYU has been doing e-learning for a long time and thinks that MOOCs suck because they were not designed to keep users engaged, the way a good teacher would. Ubell points to active learning, a theory that getting students deeply involved in the learning process will produce better outcomes. For example, active learning holds that asking students questions during a lecture produces better results because students are more deeply engaged in the process. By involving students, we can better keep their attention, which is one of the fundamental brain mechanisms governing learning. MOOCs suck at knowing when you are paying attention. Good teachers know this by the glazed look in their students’ eyes as their attention drifts into the mental netherworld between the classroom and PewDiePie’s latest embarrassment.

If we had a good way to measure attention, we would have a way of improving MOOCs. The problem is that scientists do not yet have reliable ways of measuring attention through a computer. Sure we can look at clicks or scrolls, but do clicks really tell you much when you are rewarded by faking it? Alternatively, we could ask you whether you are paying attention, but this will disrupt the course experience. This is why I am looking at brain data. If we watch people’s brains, we can reliably understand when they stop paying attention, and maybe build MOOCs that teach better. It’s a bold idea, but if it works, we could develop technologies that achieve this original vision: quality education for everyone, everywhere, for free.

Getting started with WordPress

Some people think that inaugural posts are really important. I have wanted to start a blog for many years, but have always been hesitant. I think it is because, at the best of times, I worry about producing high quality work, and for a long time I suffered from crippling writer’s block. I think that a lot of graduate students can relate. Scholars are notorious for overthinking things.

Over time, I have learned to manage writer’s block by designing an environment that forces me to do things. In the case of this blog, I committed to preparing “how-to” materials for the Dalhousie University Public Scholars Program. The final result of this work is a series of videos on getting started with WordPress, which feature this website. This series is designed to be specific to some of the unique needs of scholars who want to start writing their own blog.