Thanks, on the record

This week, I reached a milestone worth sharing. It marks the next step in a long and extraordinary journey.

As the sun sets in Halifax, I like to look out my office window at the McCain Building, where I started this journey in 2006 and did most of my undergrad. It reminds me of the many years of learning and events that have long since come and gone, and of how lucky I am to teach here. In a globalized world, not many people have the privilege of working where they grew up, much less in academia.

Today, I received notice from Kim Brooks, President of Dalhousie University, stating that I have been recommended for promotion to Associate Professor and tenure, an indefinite appointment.

For academics, this is a major milestone and achievement. It’s an incredible privilege to be granted this honour, and it’s incredible to know that I am held in such high esteem by my colleagues, both in Halifax and in the international community. The honour also reflects the goodwill of my students and friends, who have all played a part in making this happen. Thank you!

For me, though, it also marks something a little different. In September, it will have been 17 years since I started here. Except for a brief stint at Queen’s, most of my professional life has been spent at Dalhousie, on this city block: in the McCain, the Goldberg, and the Rowe buildings. Sometimes I wonder what could have been if I had left. What could my life have been if I had taken a step down a different path?

These thoughts disappear when I am reminded why I committed to this path. Dalhousie has consistently offered not just a good opportunity, but the best opportunity for me. I have received nothing but the utmost support from this community in pursuing seemingly strange adventures at the margins of disciplines. There is probably no other role that could so well support a life of strange research hobbies alongside a passion for equipping people with life-changing education, and, even more, allow me to see these things realized in others over decades.

I am extraordinarily privileged to have a job that is also a vocation and a life mission. Thank you, everyone, and I am looking forward to the decades to come!

Some reflections on collaborations in light of CHI 2023

I “grew up” in an academic discipline where people tended to work alone. A philosophy student typically writes self-authored reflective papers. While mature philosophers definitely understand the importance of collaboration, my studies led me to believe, at least early on, that someone’s greatest work is achieved alone.

As I got older, I realized how important it is to have fantastic teams and communities. The importance goes beyond the synergy between people (i.e. a sum greater than its parts); it is also how some of humanity’s best attributes are unlocked.

Working together allows you to understand the world in new and sometimes profound ways. These understandings help us advance. Our species’ ‘secret sauce’ is not our intelligence, but how our cultures transfer new ideas and ways of working in the world. Humans generally don’t invent things by sitting around in isolation; they leverage the learnings of others both past and present.

I think this idea captures part of the spirit of communities like ACM CHI and is what makes them great. It was an honour to attend #chi2023 in Hamburg and see firsthand how teams can work together to really move the needle. It is also a fantastic privilege to have great collaborators like Anika and Aaron.

By working together, this community leverages information technologies to create fantastic new possibilities for humans. Many of CHI’s leaders are committed not just to advancing themselves, but to an ever-growing and diverse community. This can help us all leverage our greatest strengths in the long term.

There’s still a lot of work to do. I am thankful for collaborations and look forward to building new ones. Thank you to ACM CHI for being awesome!

A need for an honest conversation

As some of you in this network know, part-time instructors and teaching assistants are currently on strike at Dalhousie. As I reflect on the financial and practical struggles that many education workers face, my mind has wandered to an even larger issue with Canadian universities: their transition towards privatization.

Yesterday, I was digging into a very interesting open dataset on industry specializations for the regions of Nova Scotia for my undergrad data class. In this dataset, industries are organized into categories (e.g. manufacturing, education) and labelled according to whether they are local (i.e., they serve the local economy only) or traded (i.e., they export goods and services). Considering just the Halifax region, there are three employers with over 500 employees under the category of “education” as of June 2021. I made a fun graph in Tableau which summarizes this analysis of our large employers.
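For readers curious what this kind of filtering looks like outside of Tableau, here is a minimal sketch in Python with pandas. The file name and column names (region, category, cluster_type, employer, employees) are hypothetical placeholders, not the dataset’s actual schema.

```python
import pandas as pd

# Load the (hypothetical) industry specialization table
df = pd.read_csv("ns_industry_specializations.csv")

# Large education employers in the Halifax region, as of a given snapshot
halifax_edu = df[
    (df["region"] == "Halifax")
    & (df["category"] == "Education")
    & (df["employees"] > 500)
]

# Show each large employer and whether its cluster is "local" or "traded"
print(
    halifax_edu[["employer", "employees", "cluster_type"]]
    .sort_values("employees", ascending=False)
)
```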

While I was pleased to see our city’s two large universities (Dalhousie and Saint Mary’s) and presumably one community college (NSCC) represented in the data, I was surprised to learn that the universities were considered a “traded” industry. Aren’t they supposed to be public institutions, like schools and hospitals?

I am not an economist and cannot speak to how the government accounts for its industries. However, the fact that universities are considered a traded industry is quite telling. Our government sees our universities as an export industry, yet Canada’s large research universities are also considered public institutions, designed to administer public goods. Which are they?

A public institution that is also an export industry is an extremely strange thing. As a society, we decided that universities deserve financial support because of the benefits they provide to their stakeholders, and consequently to society as a whole. Yet we expect the stakeholders of universities to pay more and more out of pocket. Dalhousie has continued to steadily increase tuition, having approved a 3% increase again this year, with much steeper increases for international undergraduate students.

I no longer have a big personal stake in the privatization conversation. I think there are interesting conversations to be had about whether the beneficiaries of these goods should foot more of the costs, as long as we expand programs to support students with financial needs. I also don’t think there is an easy answer here.

However, I do think that Canadians need to have a hard conversation that we are simply not having. We need to face the fact that our universities are no longer public institutions, or at the very least are not acting like them. It’s time to figure out whether this is what we really want.

The secret of our success (with online learning)

In The Secret of Our Success, anthropologist Joseph Henrich outlines his hot take on why humans rose to become the ecologically dominant species: our ability to learn from each other. For many of us steeped in cognitive science, this position is surprising. We would expect the reason to be that human beings are just plain smarter. After all, humans seem to be considerably better at cognitive tasks, such as learning and processing large amounts of information. Prominent thinkers such as Steven Pinker have advanced this view, holding that human brains are well adapted to be versatile.

However, there is interesting evidence that humans are not particularly smarter than our closest genetic relatives. In a series of studies comparing the intelligence of apes and two-year-old humans, Esther Hermann and her team demonstrated that human infants do not fare better than chimpanzees at most cognitive tasks (e.g. quantities, causality), nor at spatial reasoning (e.g. object permanence). However, the infants demonstrated considerably better social cognition. Adult chimps did not fare better than juvenile chimps at these tasks, but adult humans excel at them. It follows that an infant’s capacity for social learning is a major factor in their ability to learn new things, such as how to perform at cognitive tests. Henrich argues that this phenomenon also explains the history of European castaways, whose survival was best determined not by their years of provisions and time to learn how to survive, but by whether they were able to befriend and learn from local Indigenous communities.

If we accept Henrich’s argument, it follows that what makes humans so successful is their capacity for cultural learning. Humans excel at learning passively, picking up on subtle cues through established social connections, or through the cumulative lens and context of their society. Our societies shape us and allow us to transfer knowledge that would be impossible to accumulate on one’s own across a lifetime. Philosophers such as Hans-Georg Gadamer and John McDowell described a similar phenomenon as Bildung, or cultural learning, and argued that it is through a process of lifelong learning and education that knowledge becomes possible. Culture can also extend beyond a nation and permeate subcultures and organizations.

I think that this view has consequences for the way we understand information technologies in organizations. When we implement a technology, we often consider factors such as its design, its usability, user training, and effective change management during its implementation. However, we don’t often consider the culture of an organization and the implications it has for learning new technologies.

Online learning during Covid-19 was an interesting case study in cultural learning with respect to information technology. In a study conducted at Dalhousie University during the pandemic, we surveyed students about their experience to understand the factors that led to satisfactory online learning. Prior to the pandemic, our university had taught primarily in person, and few people in our organization had experience either teaching or learning using online tools. We ultimately found that students’ perception of the difficulty of learning online was a strong predictor of dissatisfaction, and that this dissatisfaction was partially explained by whether someone perceived that they had the necessary technical skills.
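For illustration, here is a rough sketch of the kind of analysis described above, using ordinary least squares regressions in statsmodels. It is not the study’s actual code; the file name and column names (perceived_difficulty, technical_skill, dissatisfaction) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey export, one row per respondent
survey = pd.read_csv("online_learning_survey.csv")

# Total effect: perceived difficulty predicting dissatisfaction
total = smf.ols("dissatisfaction ~ perceived_difficulty", data=survey).fit()

# Partial explanation: does perceived technical skill account for part of it?
partial = smf.ols(
    "dissatisfaction ~ perceived_difficulty + technical_skill", data=survey
).fit()

print(total.params)
print(partial.params)
```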

This leads me to ask whether our collective experience through the pandemic would have been better if our organization had already developed a culture of learning online. Internal surveys later revealed that students at Dalhousie University were largely dissatisfied with the emergency response, though also that the Faculty of Management fared better than most. In hindsight, our elevated satisfaction was possibly explained by our faculty’s prior experience with online learning through its blended learning programs, as well as its focus on implementing teaching technologies such as ERPsim.

The culture of an organization thus plays an important part in supporting its stakeholders when they develop new skills or capabilities. Moving forward, we should ensure that we push to make technological innovation a key part of our identity, not just in branding. It is imperative that we create cultures where individuals have resources (including time) for innovation, where failure is tolerated, and where teams can be rewarded for taking risks. In doing so, we could create organizations that are better equipped for radical circumstances.

Leaders should focus on building the right culture. They should also heed Drucker’s wise words: “culture eats strategy for breakfast”. This adage extends well beyond online learning; culture might just be the most important factor in building a successful organization.

Fundraising AI Forum 2021

I participated in #FUNAI2021. You can find the slides from my presentation here.

Extended Abstract

Over the past five years there has been a rise in public awareness about the efficacy of social media for targeted advertising. Most notoriously, social media was heavily leveraged in both the Obama 2012 (Enli & Naper, 2016) and Trump 2016 political campaigns to generate advertising advantages. In the case of the latter, illegal and unethical use of social media data by the now-defunct Cambridge Analytica facilitated an unprecedented advantage by leveraging intimate personal data acquired from Facebook’s servers (Isaak & Hanna, 2018). Perhaps more than any other case, this has fuelled public skepticism about targeted advertising. Yet, despite such increased public scrutiny and concerns about privacy (Gruzd & Hernández-García, 2018), evidence suggests that targeted advertising decreases advertisement avoidance, potentially decreasing advertising costs for the organizations that employ it (Jung, 2017).

Should charities follow suit? Evidence from two prior studies conducted by researchers at Dalhousie University suggests that social media can similarly be leveraged to conduct targeted advertising. In the first study, publicly accessible Twitter data was used to predict political donations among 438 Twitter users with 70% accuracy (Conrad & Kešelj, 2016). In the second study, Twitter data was used to predict donations to charities with 71% accuracy (Calix Woc, 2020). These results could likely be improved further, lending confidence that such data could be leveraged to conduct targeted donation asks, ultimately decreasing charities’ costs of prospecting online. This would certainly be welcome in the era of Covid-19 and our increasingly digital world. However, the use of such targeted advertising techniques could also increase privacy concerns among donors and stakeholders, even when only leveraging publicly accessible data (Jung, 2017).
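To make the general approach concrete, here is a hedged sketch of a character n-gram text classifier in scikit-learn, in the spirit of the studies cited above (though not their actual code). The data file and its columns are hypothetical placeholders.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical dataset: one row per user, with concatenated tweet text and a
# binary label indicating whether the user donated
tweets = pd.read_csv("user_tweets.csv")

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)

# Cross-validated accuracy, analogous to the ~70% figures reported above
scores = cross_val_score(model, tweets["text"], tweets["donated"], cv=5)
print("Mean accuracy:", scores.mean())
```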

It is now critically important to invest in an open science research programme on how artificial intelligence and social media can be ethically leveraged in the donor prospecting process. Open science can mean many things but has been formally described as “transparent and accessible knowledge that is shared and developed through collaborative networks” (Vicente-Saez & Martinez-Fuentes, 2018). Generally, open science can consist of the transparent publication of data (when possible), as well as the publication of transparent methods for conducting the research, and the publication of open access scientific reports. Such open science research has the potential to generate new insights for all charities and nonprofits, while also increasing transparency, awareness, and control for potential donors.

It is possible to create a collaborative process for conducting this research. Such a process would ask prior charitable donors to explicitly consent to data linkages between past donations and social media profiles. The results of the research could be published publicly, and the artificial intelligence generated could be used solely to improve matching between prospective donors and potential causes. This would ultimately build a virtuous cycle of innovation and trust between donors and charities, preparing the sector for the challenges of the AI-enabled age.

References

Calix Woc, Carlos. (2020). Psychographic Profiling of Charitable Donations Using Twitter Data and Machine Learning Techniques [Master’s Thesis]. Dalhousie University.

Conrad, C., & Kešelj, V. (2016). Predicting Political Donations Using Twitter Hashtags and Character N-Grams. 2016 IEEE 18th Conference on Business Informatics (CBI), 2, 1–7.

Enli, G., & Naper, A. A. (2016). Social Media Incumbent Advantage: Barack Obama’s and Mitt Romney’s Tweets in the 2012 U.S. Presidential Election Campaign. In The Routledge Companion to Social Media and Politics (pp. 364–378). Routledge.

Gruzd, A., & Hernández-García, Á. (2018). Privacy Concerns and Self-Disclosure in Private and Public Uses of Social Media. Cyberpsychology, Behavior, and Social Networking, 21(7), 418–428. https://doi.org/10.1089/cyber.2017.0709

Isaak, J., & Hanna, M. J. (2018). User Data Privacy: Facebook, Cambridge Analytica, and Privacy Protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268

Jung, A.-R. (2017). The influence of perceived ad relevance on social media advertising: An empirical examination of a mediating role of privacy concern. Computers in Human Behavior, 70, 303–309. https://doi.org/10.1016/j.chb.2017.01.008

Vicente-Saez, R., & Martinez-Fuentes, C. (2018). Open Science now: A systematic literature review for an integrated definition. Journal of Business Research, 88, 428–436. https://doi.org/10.1016/j.jbusres.2017.12.043

Three tips on how to learn well during COVID-19, backed by brain science

If you are reading this, you are likely aware that most universities in Canada are transitioning to an online format for the Fall 2020 semester. This has presented many challenges for professors because teaching online is totally different! We are putting a lot of effort into learning how to teach effectively. However, in my many discussions about online learning over the past few months, we haven’t spent a lot of time talking about the other challenge: how to effectively prepare students for this transition. I thought I would share three tips on how to be an effective online learner, some of which come from my research.

Tip 1: Take micro-breaks regularly

I recently re-analyzed some of my PhD thesis data and found something interesting. In my PhD thesis, I described studies of people’s brain patterns as they watched long lecture videos. The software I created also asked them questions periodically throughout the video. When asked at the 15 minute mark, participants reported being largely on task, though at the 30 minute mark they reported a significantly higher degree of mind wandering. I also found that the degree of reported mind wandering significantly impacted how well students learned from the lecture. Long lectures are hard to learn from! This isn’t news.

What I recently discovered is that there are some brain patterns that predict mind wandering pretty accurately. These patterns consisted of significantly higher levels of delta waves (which are associated with sleepiness) and alpha waves (which are associated with meditation and self-directed thoughts). Clearly, the longer you focus on a lecture video, the more likely your brain will veg out and focus on other things. Though this does not prove that taking breaks would improve learning, the pattern is clear, and disrupting it may result in better learning. The picture below illustrates the degree of these waves in states of being “completely on task” versus “completely mind wandering”.

Regular breaks every 20 minutes or so may prevent your mind from wandering. When your mind is focused, you learn better. 

Figure: Differences in delta (sleepy waves) and alpha (meditation waves) when your mind is wandering. Minds are more likely to wander as a lecture progresses.
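For the technically inclined, here is a simplified sketch of how the delta and alpha band powers behind a figure like this could be computed from EEG epochs. It is an illustration under assumed parameters (a 256 Hz sampling rate and random placeholder data), not the thesis analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

fs = 256  # assumed EEG sampling rate in Hz

def band_power(epoch, low, high):
    """Average spectral power of a single-channel epoch within a band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def mean_band_powers(epochs):
    """Mean delta (1-4 Hz) and alpha (8-12 Hz) power across epochs."""
    delta = np.mean([band_power(e, 1, 4) for e in epochs])
    alpha = np.mean([band_power(e, 8, 12) for e in epochs])
    return delta, alpha

# Placeholder data standing in for probe-labelled epochs (20 epochs x 10 s)
rng = np.random.default_rng(0)
epochs_on_task = rng.standard_normal((20, fs * 10))
epochs_wandering = rng.standard_normal((20, fs * 10))

print("on task (delta, alpha):", mean_band_powers(epochs_on_task))
print("wandering (delta, alpha):", mean_band_powers(epochs_wandering))
```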

Tip 2: Use multimedia that is most effective for you

Some people like learning from books while others love YouTube videos (I am guilty!). However, for a serious online learner, it is usually best to have a combination of tools at your disposal. Education scientist Richard Mayer spent most of his career explaining how and why this works. In Multimedia Learning he argued that “humans possess separate information-processing channels for visually represented material and auditorily represented material.” In other words, learning from a combination of pictures and words is better than learning from either pictures alone or words alone.

Our brains are organized in a series of networks that conduct and compute various sensory and processing tasks. Many of these networks have common features (e.g. audio and visual attention networks), but ultimately use different tools to do their job. If you push one network too hard, it is difficult to effectively abstract information into knowledge. However, by distributing the workload, your brain can more effectively abstract experiences and information to learn. Different people have different capacities, which may be one of the factors influencing preferred learning styles.

Sometimes, however, online learning is hampered by language barriers, or perhaps by a poorly delivered lesson. When this happens, a lesson generates what John Sweller called extraneous cognitive load. Though we need a bit of challenge to learn effectively, it can seem impossible when difficult material is delivered in a hard-to-understand format. When this happens, it is extra important to find other resources that can supplement the lesson. If you are lucky, these resources will be in a format that works for you. Fortunately, in a world of YouTube and Coursera, we have no shortage of content to choose from.

If you find it hard to learn from a video, crack open your textbook. If you can’t find answers, don’t hesitate to search for other outside resources or reach out to your teaching team for help.

Tip 3: Be intentional about socializing online

Finally, one of the great challenges presented by the COVID-19 situation in Canada is that we will not be able to hold in-person social activities. In some ways, it almost feels like we live on spaceships, alone-yet-together; what YouTube creator C.G.P. Grey called “Spaceship You”. For many students, “Spaceship You” is challenging because good learning experiences are shaped in large part by your community of learning.

E-learning scholars have noticed that “social presence” is a critical component of e-learning success. In a classic paper, Johnson et al. (2008) demonstrated that satisfactory e-learning environments were identified as 1) personal, 2) sociable, 3) sensitive, 4) warm, and 5) active. A more recent meta-analysis on the role of social presence in online learning has supported this finding; it is clear that social presence is important for learning online.

The challenge with social presence is that though instructors play a critical part in facilitating it, students must also take initiative to create such an environment. At Dalhousie, we have plans to implement some promising new technology to help with this (which will be announced later this summer). However, technology is only good if it is actually used. It will be critically important for both students and instructors to be actively engaged on these platforms.

University is a social experience. Be active on your course chat and other online community platforms. 

The fall semester will certainly be different from any that have come before. Though there are challenges, there may also be unique opportunities for personal and professional growth. We don’t have all of the answers, but we are working hard to create a good learning experience. I am looking forward with an open mind; with any luck, 2020 will be remembered as a year of unprecedented internet innovation, not just disruption.

Technology Can Change Everything about the Business of Higher Ed (for the Better)

This post was originally featured in the Dalhousie Business Review on December 3rd, 2019. You can find the link to the original article here.

In 2012, an article in The American Interest made waves heralding the “End of the University as We Know It”. In this article, the author predicted that half of the colleges and universities in the United States would disappear within the next decade, because schools such as MIT or Harvard were poised to acquire millions of students by offering their courses for free over the internet. The technology behind this change would later be known as Massive Open Online Courses (MOOCs). With university students paying ever-increasing amounts in tuition, many predicted that no-to-low cost MOOCs would disrupt the traditional university business model.

However, as time passed it became increasingly clear that this technology would not succeed at disrupting traditional education as envisioned. The following year, a study of MOOC courses offered by The University of Pennsylvania found that fewer than five percent of MOOC participants actually finished the course they enrolled in, and more worryingly, that MOOC success was best predicted by whether a student already had a university degree. Other studies soon supported this finding and the initial optimism about MOOCs wavered. It seems that the MOOC experience is very different from the traditional university; with the benefit of hindsight in 2019, nobody is surprised by this discovery.

There are many reasons why MOOCs and other online education experiences are different from in-person classrooms. One likely factor (which is backed by significant evidence) is social presence, the ability to perceive others in a learning experience. Another factor is the design of a learning experience, which needs to be appropriately difficult and to inhibit mind wandering, similar to how a good instructor would deliver a lecture. The evidence is clear: there are parts of a learning experience where a human element is critical to success.

Yet, the underlying financial challenge to the higher education business model remains, and universities are under ever-increasing pressure to reduce costs. This often translates to pressure to increase classroom size, downsize library collections, or to simply ask professors to teach more. If we agree that the human element is the secret sauce to quality teaching, how can colleges and universities adapt to face these pressures without compromising the very thing that makes higher education work?

Many sectors have already undergone (and continue to undergo) digital transformations, which have changed nearly everything about how they work. In the 1980s and 90s, for example, Ford underwent radical digital restructuring which not only dramatically improved efficiency, but completely changed the structure of its accounts payable department and its processes. In this classic case, Ford used gains in early internet technology to completely re-engineer how the company operates, largely by automating manual business processes. In 2019, companies, and to a lesser extent public institutions, continue to transform themselves with digital technology, which allows them to do more with the same number of human resources.

Higher education professionals are often perceived to be resistant to digital transformation, and often for good reasons. For instance, in The Slow Professor, Canadian professors Maggie Berg and Barbara Seeber critique the increasingly administrative nature of academic work and defend the merits of traditional education. They call on professors to actively resist the corporatization of academia and to force academic institutions to make time for slow, creative thought. Rather than spending time writing emails or publishing results as quickly as possible, they argue that professors (as well as other higher ed professionals) should instead take the time to cultivate scholarly value. At face value, this sentiment resists the trend of digital transformation, which often requires workers to leverage new technologies to increase productivity, often at the expense of work-life balance.

It doesn’t have to be this way. Though Ford ultimately used their efficiency gains to reduce the number of accounts payable employees by 75%, universities (as non-profit institutions) do not have to follow suit. We have a tremendous opportunity to use digital transformation to automate administrative work and free up time for what matters.

For instance, with proper e-learning capabilities, educators could create flipped classrooms complete with pre-recorded lecture components and (partially) automated evaluation, so that faculty can spend more time guiding hands-on experiences with smaller in-person tutorials. Such a digitally enhanced classroom would automate the boring stuff and free up time to do the things that matter: providing meaningful mentorship and formative experiences for students. Digital technologies can similarly be used to reduce the number of forms that need to be filled out, efficiently document information that would otherwise be repeatedly sent in dozens of emails, or to give data-driven insights into emerging problems before they arise.

Digital transformation is not new. Colleges and universities simply need to embrace a culture where digital transformation is possible. They must also avoid the trap of providing watered-down MOOCs and instead focus their information strategy on maximizing the qualities that defined the university experience in the first place. By doing this, colleges and universities can enhance the best qualities of higher education at a time when the challenges can seem insurmountable. The transformation would be painful, but it would also be worth it.

The practical value of basic PhD research

There is a growing awareness about the downsides of pursuing a PhD. During my time as a PhD student, I was regularly reminded about the challenges of the academic job market and about the doctorate’s poor financial returns on investment.

Though there are also contrary views about the degree’s value, these views often point to the practical research skills that PhD students develop and their relevance to industry. Today, PhD students are often encouraged to conduct applied research, so that they can easily communicate and transfer the results of their work.

One of the overlooked practical benefits of the PhD degree is that it offers students the opportunity to conduct curiosity-driven basic research. PhD students often have the option to focus on answering fundamental questions about why or how the universe is the way it is, the answers to which might not be easily applied to a particular industry. Counter-intuitively, I argue that basic research has economic benefits, and that these benefits should be considered when assessing the practical value of PhD studies.

Risk and uncertainty

My argument is rooted in the difference between risk and uncertainty, as originally articulated by economist Frank Knight. Risk concerns situations where we do not know the outcome but can measure the odds. For example, researchers might not have discovered the best artificial intelligence algorithms for predicting air quality, but they can predict that a solution is possible; the odds of success are known.

Uncertainty, on the other hand, occurs when the odds of success are unknown. Basic research pursues questions with uncertain outcomes and unpredictable practical value. This often leads to the perception that money spent on basic research is wasted. However, the impact of basic research is occasionally great and, paradoxically, can yield huge practical applications.

One example is the research of British-Canadian computer scientist Geoffrey Hinton at the University of Toronto. In the 1980s, Hinton conducted research on artificial neural networks, which were then seen as curiosities with little practical value. Today, neural networks are the backbone of a new generation of artificial intelligence behind technologies ranging from Apple’s Siri to automated cancer detection systems.

A second example is Canadian physicist Donna Strickland, who recently won a Nobel Prize for her work on chirped pulse amplification. Strickland, whose Nobel Prize winning work began while she was a doctoral student, later reflected on how it took at least a decade for practical applications of her research to come into view.

Strategies for fostering basic research

Both Hinton and Strickland conducted basic research early in their academic careers which did not yield practical applications until decades afterwards. In both cases, the future practical benefits to society were unknown and could not have been known or quantified at the time.

PhD studies offer the benefit of full-time research, which can become scarce later in a scholarly career. The PhD can be a vehicle for conducting and cultivating curiosity-driven research—a privilege that is not experienced elsewhere in society, yet has unique benefits.

The potential practical value of basic research, especially at the PhD level, should therefore be considered by research policymakers. However, policymakers may find it challenging to finance research activities when we cannot easily quantify the outcomes.

We can take steps to maximize the effectiveness of basic research by drawing on the ways entrepreneurs and angel investors deal with uncertainty. For example, policies can emphasize supporting a large number of promising or interesting PhD projects, rather than offering more financing for the best ones. This is a common approach taken by successful angel investors.

Alternatively, policymakers can attempt to maximize serendipity, which has been identified as aiding unintended discovery. Encouraging collaboration among PhD students across disciplines could increase the chance of discovery. Interdisciplinary PhD programs might further offer a way to encourage students to tackle some of the pressing issues of today’s world, which are often interdisciplinary in nature.

Regardless of approach, we must stop viewing the value of a PhD degree as purely intangible and recognize the economic and practical value it plays in basic research. Though not all PhD students can be expected to become Hinton or Strickland, basic research at the PhD level enabled their success. It can continue to enable future generations of researchers too, if we allow it.

Reflections about PhDs on the eve of a defence

Despite my noble intentions with respect to this blog, I have been largely unsuccessful at adding content. There are many reasons for this, such as the aggressive teaching and publishing deadlines from last semester. However, a big part of the reason for this is my looming thesis defence, due to happen tomorrow at 10:00 am. I thought it would be fitting to reflect a bit on what it means to do a PhD and what the last four years have entailed, in hindsight.

Having done more than my fair share of university degrees, I can attest that the PhD is quite different from the others. There are clear financial reasons for pursuing master’s or professional programs. However, in many disciplines PhDs come with few financial rewards. According to the US Census Bureau, in many fields with robust professional programs (e.g. business or law), median earnings among PhD holders are lower than among their professional counterparts (e.g. MBA, JD). When one considers that the opportunity cost of doing a PhD is 3 to 7 years of productive labour, it becomes clear that the motivation for doing a PhD is often not financial. Realistically, the PhD only equips students to do one thing: make a substantial contribution of research to the academic community in their chosen discipline. Though some PhDs also require students to gain teaching experience, this is not mandatory in all programs. When it is, students can expect to spend hundreds of hours teaching a course or working as a teaching assistant. This is small compared to the thousands of hours spent cultivating research.

The best analogy for a PhD that I have encountered was written by Tad Waddington in Lasting Contribution. The quote reads:

The last step of the [education] process is to contribute to knowledge, which is unlike the previous steps. Elementary school is like learning to ride a tricycle. High school is like learning to ride a bicycle. College is like learning to drive a car. A master’s degree is like learning to drive a race car. Students often think that the next step is more of the same, like learning to fly an airplane. On the contrary, the Ph.D. is like learning to design a new car. Instead of taking in more knowledge, you have to create knowledge. You have to discover (and then share with others) something that nobody has ever known before.

When I first read this quote three years ago, it stuck with me. I had the good fortune of having pursued two master’s degrees before starting my PhD and had originally thought that the PhD would be like a more advanced version of the previous two. Looking back, I don’t believe that was the case, and I agree with Waddington more than ever. If you are ever considering doing a PhD, I recommend that you be the sort of person who enjoys spending a ridiculous amount of time and energy to make a difficult, small, yet very real contribution to human knowledge.

It’s been a pleasure to have the opportunity to pursue and cultivate interdisciplinary research which I feel truly does break down barriers between disciplines. It has been challenging and at times grueling, but also rewarding, and I believe I have come out a better person. I would like to thank everyone for supporting me through the journey. I wouldn’t do it any differently if I could do it all again.

Why MOOCs are bad and what we can do about it

Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013, December). The life cycle of a million MOOC users. In MOOC Research Initiative Conference (pp. 5-6).

In 2011, Sebastian Thrun and David Evans had the bright idea of recording their Stanford University lectures and posting them on the internet. The initiative was incredibly successful, and hundreds of thousands of students flocked over the broadband highways to learn about artificial intelligence from two of the greatest minds in the field. In fact, their original initiative was so successful that Thrun later left his job to found Udacity, today one of the most innovative and disruptive forces in university education. By 2012, it seemed that the whole world was talking about Massive Open Online Courses (“MOOCs”) and that MOOCs were on the path to transforming how teaching was done forever by providing the highest quality education to everyone, everywhere, for free.

This story has a deeply personal element for me. Then an unemployed (or at times underemployed) philosophy grad, I had to decide whether to go back to school to pursue yet another graduate degree, this time in computer science. I sometimes wonder what my life would have been like if I had instead taken a year off and gorged myself on an intellectual mash of Stanford videos on machine learning. I usually conclude that it would have been for the worse. I am very thankful that I ended up going back to school, because I probably learned a lot more than I would have otherwise. In late 2013 and early 2014, a number of quantitative studies were published that were like a wet blanket over Silicon Valley’s burning fire for MOOCs. Researchers at Penn, for instance, found that as few as 5% of MOOC registrants actually finish their courses, while only a fraction of those attain high grades. What’s worse, MOOC users were found to come disproportionately from educated, male, and wealthy backgrounds, largely in the USA. So much, then, for the fad that was the MOOC revolution. Or so the story goes.

Why do MOOCs suck so much at teaching the people that they are trying to help? One of the many reasons is that they are not well designed. Robert Ubell from NYU has been doing e-learning for a long time and thinks that MOOCs suck because they were not designed to keep users engaged, like a good teacher would. Ubell points to active learning, a theory that getting students deeply involved in the learning process will produce better outcomes. For example, active learning holds that asking students questions during a lecture produces better results because students are more deeply engaged in the process. By involving students, we can better keep their attention, which is one of the fundamental brain mechanisms governing learning. MOOCs suck at knowing when you are paying attention. Good teachers know this by the glazed look in their students’ eyes as their attention drifts into the mental netherworld between the classroom and PewDiePie’s latest embarrassment.

If we had a good way to measure attention, we would have a way of improving MOOCs. The problem is that scientists do not yet have reliable ways of measuring attention through a computer. Sure, we can look at clicks or scrolls, but do clicks really tell you much when you are rewarded for faking it? Alternatively, we could ask you whether you are paying attention, but this would disrupt the course experience. This is why I am looking at brain data. If we watch people’s brains, we can reliably understand when they stop paying attention, and maybe build MOOCs that teach better. It’s a bold idea, but if it works, we could develop technologies that achieve the original vision: quality education for everyone, everywhere, for free.