Three tips on how to learn well during COVID-19, backed by brain science

If you are reading this, you are likely aware that most universities in Canada are transitioning to an online format for the Fall 2020 semester. This has presented many challenges for professors because teaching online is totally different! We are putting a lot of effort into learning how to teach effectively. However, in my many discussions about online learning over the past few months, we haven’t spent a lot of time talking about the other half of the challenge: how to effectively prepare students for this transition. I thought I would share three tips on how to be an effective online learner, some of which come from my own research.

Tip 1: Take micro-breaks regularly

I recently re-analyzed some of my PhD thesis data and found something interesting. In my PhD thesis, I described studies of people’s brain patterns as they watched long lecture videos. The software I created also asked them questions periodically throughout the video. When asked at the 15-minute mark, participants reported being largely on task, but at the 30-minute mark they reported a significantly higher degree of mind wandering. I also found that the degree of reported mind wandering significantly impacted how well students learned from the lecture. Long lectures are hard to learn from! This isn’t news.

What I recently discovered is that there are some brain patterns that predict mind wandering pretty accurately. These patterns were significantly higher levels of delta waves (which are associated with sleepiness) and alpha waves (which are associated with meditation and self-directed thoughts). Clearly, the longer you focus on a lecture video, the more likely your brain will veg out and focus on other things. Though this does not prove that taking breaks would improve learning, the pattern is clear, and disrupting it may result in better learning. The picture below illustrates the degree of these waves in states of being “completely on task” versus “completely mind wandering”.

Regular breaks every 20 minutes or so may prevent your mind from wandering. When your mind is focused, you learn better. 

Figure: Differences in delta (sleepy waves) and alpha (meditation waves) when your mind is wandering. Minds are more likely to wander as a lecture progresses.
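To make the idea concrete, here is a minimal sketch (not my actual analysis pipeline) of how delta and alpha band power can be estimated from an EEG signal using SciPy’s Welch method. The sampling rate, band edges, and the combined “wandering score” are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Integrated power spectral density within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].sum())

def wandering_score(eeg, fs=256):
    """Delta + alpha power relative to broadband (1-40 Hz) power.

    Higher values correspond to the sleepy/self-directed patterns that
    accompanied reported mind wandering in the study described above.
    """
    total = band_power(eeg, fs, 1, 40)
    delta = band_power(eeg, fs, 1, 4)    # associated with sleepiness
    alpha = band_power(eeg, fs, 8, 12)   # associated with self-directed thought
    return (delta + alpha) / total

# Synthetic demo: white noise plus a strong 10 Hz (alpha-band) rhythm,
# mimicking a viewer whose mind has drifted off
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 256)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(wandering_score(eeg))  # dominated by the alpha peak, so close to 1
```

In a real experiment these scores would be computed per electrode and compared against self-report probes, but the core signal-processing step looks much like this.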

Tip 2: Use multimedia that is most effective for you

Some people like learning from books while others love YouTube videos (I am guilty!). However, for a serious online learner, it is usually best to have a combination of tools at your disposal. Education scientist Richard Mayer spent most of his career explaining how and why this works. In Multimedia Learning he argued that “humans possess separate information-processing channels for visually represented material and auditorily represented material.” In other words, learning from a combination of pictures and words is better than learning from either pictures alone or words alone.

Our brains are organized in a series of networks that conduct and compute various sensory and processing tasks. Many of these networks have common features (e.g. audio and visual attention networks), but ultimately use different tools to do their job. If you push one network too hard, it is difficult to effectively abstract information into knowledge. However, by distributing the workload, your brain can more effectively abstract experiences and information to learn. Different people have different capacities, which may be one of the factors influencing preferred learning styles.

Sometimes, however, online learning is hampered by language barriers, or perhaps by a poorly delivered lesson. When this happens, a lesson generates what John Sweller called extraneous cognitive load. Though we need a bit of challenge to effectively learn, it can seem impossible when difficult material is delivered in a hard-to-understand format. When this happens, it is extra-important to find other resources which can supplement a lesson. If you are lucky, these resources will be in a format that works for you. Fortunately, in a world of YouTube and Coursera, we have no shortage of content to choose from.

If you find it hard to learn from a video, crack open your textbook. If you can’t find answers, don’t hesitate to search for other outside resources or reach out to your teaching team for help.

Tip 3: Be intentional about socializing online

Finally, one of the great challenges presented by the COVID-19 situation in Canada is that we will not be able to hold in-person social activities. In some ways, it almost feels like we live on spaceships, alone-yet-together; what YouTube creator C.G.P. Grey called “Spaceship You”. For many students, “Spaceship You” is challenging because good learning experiences are shaped in large part by your community of learning.

E-learning scholars have noticed that “social presence” is a critical component of e-learning success. A classic paper by Johnson et al. (2008) demonstrated that satisfactory e-learning environments were perceived as 1) personal, 2) sociable, 3) sensitive, 4) warm and 5) active. A more recent meta-analysis on the role of social presence in online learning supports this finding: it is clear that social presence is important for learning online.

The challenge with social presence is that though instructors play a critical part in facilitating it, students must also take initiative to create such an environment. At Dalhousie, we have plans to implement some pretty new technology to help with this (which will be announced later this summer). However, technology is only good if it is actually used. It will be critically important for both students and instructors to be actively engaged on technology platforms.

University is a social experience. Be active on your course chat and other online community platforms. 

The fall semester will certainly be different from any that have come before. Though there are challenges, there may also be unique opportunities for personal and professional growth. We don’t have all of the answers, but we are working hard to create a good learning experience. I am looking forward with an open mind; with any luck, 2020 will be remembered as a year of unprecedented internet innovation, not just disruption.

Technology Can Change Everything about the Business of Higher Ed (for the Better)

This post was originally featured in the Dalhousie Business Review on December 3rd, 2019. You can find the link to the original article here.

In 2012, an article in The American Interest made waves heralding the “End of the University as We Know It”. In this article, the author predicted that half of the colleges and universities in the United States would disappear within the next decade, because schools such as MIT or Harvard were poised to acquire millions of students by offering their courses for free over the internet. The technology behind this change would later be known as Massive Open Online Courses (MOOCs). With university students paying ever-increasing amounts in tuition, many predicted that no-to-low cost MOOCs would disrupt the traditional university business model.

However, as time passed it became increasingly clear that this technology would not succeed at disrupting traditional education as envisioned. The following year, a study of MOOC courses offered by The University of Pennsylvania found that fewer than five percent of MOOC participants actually finished the course they enrolled in, and more worryingly, that MOOC success was best predicted by whether a student already had a university degree. Other studies soon supported this finding and the initial optimism about MOOCs wavered. It seems that the MOOC experience is very different from the traditional university; with the benefit of hindsight in 2019, nobody is surprised by this discovery.

There are many reasons why MOOCs and other online education experiences are different from in-person classrooms. One likely factor (which is backed by significant evidence) is social presence, the ability to perceive others in a learning experience. Another is the design of the learning experience itself, which needs to be appropriately difficult and structured to inhibit mind wandering, much as a good instructor’s lecture would be. The evidence is clear: there are parts of a learning experience where a human element is critical to success.

Yet, the underlying financial challenge to the higher education business model remains, and universities are under ever-increasing pressure to reduce costs. This often translates to pressure to increase classroom size, downsize library collections, or to simply ask professors to teach more. If we agree that the human element is the secret sauce to quality teaching, how can colleges and universities adapt to face these pressures without compromising the very thing that makes higher education work?

Many sectors have already undergone (and continue to undergo) digital transformations, which have changed nearly everything about how they work. In the 1980s and 90s, for example, Ford underwent radical digital restructuring which not only dramatically improved efficiency, but completely changed the structure of its accounts payable department and its processes. In this classic case, Ford used the gains in early internet technology to completely re-engineer how the company operated, largely by automating manual business processes. In 2019, companies, and to a lesser extent public institutions, continue to transform themselves with digital technology, which allows them to do more with the same number of human resources.

Higher education professionals are often perceived to be resistant to digital transformation, and often for good reasons. For instance, in The Slow Professor Canadian professors Maggie Berg and Barbara Seeber critique the increasingly administrative nature of academic work and defend the merits of traditional education. They call on professors to actively resist the corporatization of academia and to force academic institutions to make time for slow, creative thought. Rather than spending time writing emails or publishing results as quickly as possible, they argue that professors (as well as other higher ed professionals) should instead take the time to cultivate scholarly value. At face value, this sentiment would seem to resist the trend in digital transformation, which often requires workers to leverage new technologies to increase productivity, often at the expense of work-life balance.

It doesn’t have to be this way. Though Ford ultimately used their efficiency gains to reduce the number of accounts payable employees by 75%, universities (as non-profit institutions) do not have to follow suit. We have a tremendous opportunity to use digital transformation to automate administrative work and free up time for what matters.

For instance, with proper e-learning capabilities, educators could create flipped classrooms complete with pre-recorded lecture components and (partially) automated evaluation, so that faculty can spend more time guiding hands-on experiences with smaller in-person tutorials. Such a digitally enhanced classroom would automate the boring stuff and free up time to do the things that matter: providing meaningful mentorship and formative experiences for students. Digital technologies can similarly be used to reduce the number of forms that need to be filled out, efficiently document information that would otherwise be repeatedly sent in dozens of emails, or to give data-driven insights into emerging problems before they arise.

Digital transformation is not new. Colleges and universities simply need to embrace a culture where digital transformation is possible. They must also avoid the trap of providing watered-down MOOCs and instead focus their information strategy on maximizing the qualities that defined the university experience in the first place. By doing this, colleges and universities can enhance the best qualities of higher education at a time when the challenges can seem insurmountable. The transformation would be painful, but it would also be worth it.

The practical value of basic PhD research

There is a growing awareness about the downsides of pursuing a PhD. During my time as a PhD student, I was regularly reminded about the challenges of the academic job market and about the doctorate’s poor financial returns on investment.

Though there are also contrary views about the degree’s value, these views often point to the practical research skills that PhD students develop and their relevance to industry. Today, PhD students are often encouraged to conduct applied research, so that they can easily communicate and transfer the results of their work.

One of the overlooked practical benefits of the PhD degree is that it offers students the opportunity to conduct curiosity-driven basic research. PhD students often have the option to focus on answering fundamental questions about why or how the universe is the way it is, the answers to which might not be easily applied to a particular industry. Counter-intuitively, I argue that basic research has economic benefits, and that these benefits should be considered when assessing the practical value of PhD studies.

Risk and uncertainty

My argument is rooted in the difference between risk and uncertainty, as originally articulated by economist Frank Knight. Risk concerns situations where we do not know the outcome but can measure the odds. For example, researchers might not have discovered the best artificial intelligence algorithms for predicting air quality, but can predict that a solution is possible; the odds of success are known.

Uncertainty on the other hand occurs when the odds of success are unknown. Basic research pursues questions with uncertain outcomes and unpredictable practical value. This often leads to the perception that money spent on basic research is wasted. However, the impacts of basic research are occasionally great, and paradoxically have huge practical applications.

One example is the research of Canadian-English computer scientist Geoffrey Hinton at the University of Toronto. In the 1980s, Hinton conducted research in artificial neural networks, which were then seen as curiosities with little practical value. Today, neural networks are the backbone of a new generation of artificial intelligence behind technologies ranging from Apple’s Siri to automated cancer detection systems.

A second example is Canadian physicist Donna Strickland, who recently won a Nobel Prize for her work in chirped optical pulses. Strickland, whose Nobel Prize-winning work began while she was a doctoral student, later reflected on how it took at least a decade for practical applications of her research to come into view.

Strategies for fostering basic research

Both Hinton and Strickland conducted basic research early in their academic careers that did not yield practical applications until decades afterwards. In both cases, the future practical benefits to society were unknown and could not have been known or quantified at the time.

PhD studies offer the benefit of full-time research, which can become scarce later in a scholarly career. The PhD can be a vehicle for conducting and cultivating curiosity-driven research—a privilege that is not experienced elsewhere in society, yet has unique benefits.

The potential practical value of basic research, especially at the PhD level, should therefore be considered by research policymakers. However, policymakers may find it challenging to finance research activities when we cannot easily quantify the outcomes.

We can take steps to maximize the effectiveness of basic research by drawing on the ways entrepreneurs and angel investors deal with uncertainty. For example, policies can emphasize supporting a large number of promising or interesting PhD projects, rather than offering more financing for the best ones. This is a common approach taken by successful angel investors.

Alternatively, policymakers can attempt to maximize serendipity, which has been identified as aiding unintended discovery. Encouraging collaboration among PhD students across disciplines could increase the chance of discovery. Interdisciplinary PhD programs might further offer a way to encourage students to tackle some of the pressing issues of today’s world, which are often interdisciplinary in nature.

Regardless of approach, we must stop viewing the value of a PhD degree as purely intangible and recognize the economic and practical value of the basic research it enables. Though not all PhD students can be expected to become Hinton or Strickland, basic research at the PhD level enabled their success. It can continue to enable future generations of researchers too, if we allow it.

Reflections about PhDs on the eve of a defence

Despite my noble intentions with respect to this blog, I have been largely unsuccessful at adding content. There are many reasons for this, such as the aggressive teaching and publishing deadlines from last semester. However, a big part of the reason for this is my looming thesis defence, due to happen tomorrow at 10:00 am. I thought it would be fitting to reflect a bit on what it means to do a PhD and what the last four years have entailed, in hindsight.

Having done more than my fair share of university degrees, I can attest to how the PhD is quite different from the others. There are clear financial reasons for pursuing Masters or professional programs. However, in many disciplines PhDs often come with few financial rewards. According to the US Census Bureau, in many fields with robust professional programs (i.e. Business or Law), median earnings among PhD holders are lower than their professional counterparts (i.e. MBA, JD). When one considers that the opportunity cost of doing a PhD is 3 to 7 years of productive labour, it becomes clear that the motivation for doing a PhD is often not financial. Realistically, the PhD only equips students to do one thing: make a substantial contribution of research to the academic community in their chosen discipline. Though some PhDs also require students to gain teaching experience, this is not mandatory in all PhD programs. When it is mandatory, students could expect to spend hundreds of hours teaching a course or working as a teaching assistant. This is small compared to the thousands of hours spent cultivating research.

The best analogy of a PhD that I have yet encountered was written by Tad Waddington in Lasting Contribution. His quote reads:

The last step of the [education] process is to contribute to knowledge, which is unlike the previous steps. Elementary school is like learning to ride a tricycle. High school is like learning to ride a bicycle. College is like learning to drive a car. A master’s degree is like learning to drive a race car. Students often think that the next step is more of the same, like learning to fly an airplane. On the contrary, the Ph.D. is like learning to design a new car. Instead of taking in more knowledge, you have to create knowledge. You have to discover (and then share with others) something that nobody has ever known before.

When I first read this quote three years ago, it stuck with me. I had the good fortune of having pursued two master’s degrees before starting my PhD and had originally thought that the PhD would be like a more advanced version of the previous two. Looking back, I don’t believe that was the case, and agree with Waddington more than ever. If you are ever considering doing a PhD, make sure you are the sort of person who enjoys spending a ridiculous amount of time and energy to make a difficult, small, yet very real contribution to human knowledge.

It’s been a pleasure to have the opportunity to pursue and cultivate interdisciplinary research which I feel truly does break down barriers between disciplines. It has been challenging and at times grueling, but also rewarding, and I believe I have come out a better person. I would like to thank everyone for supporting me through the journey. I wouldn’t do it any differently if I could do it all again.

Why MOOCs are bad and what we can do about it

Perna, L., Ruby, A., Boruch, R., Wang, N., Scull, J., Evans, C., & Ahmad, S. (2013, December). The life cycle of a million MOOC users. In MOOC Research Initiative Conference (pp. 5-6).

In 2011, Sebastian Thrun and David Evans had the bright idea of recording their Stanford University lectures and posting them on the internet. The initiative was incredibly successful and hundreds of thousands of students flocked over the broadband highways to learn about artificial intelligence from two of the greatest minds in the field. In fact, their original initiative was so successful that Thrun later left his job to found Udacity, today one of the most innovative and disruptive forces in university education. By 2012, it seemed that the whole world was talking about Massive Open Online Courses (“MOOCs”) and that MOOCs were on the path to transforming how teaching was done forever by providing the highest quality education to everyone, everywhere, for free.

This story has a deeply personal element for me. Then an unemployed (or at times underemployed) philosophy grad, I had to make a decision about whether to go back to school to pursue yet another graduate degree, this time in computer science. I sometimes wonder what my life would have been like if I had decided to take a year off and gorge myself on an intellectual mash of Stanford videos on machine learning. I usually conclude that it would have been for the worse. I am very thankful that I ended up going back to school because I probably learned a lot more than I would have otherwise. In late 2013 and early 2014, a number of quantitative studies were published that were like a wet blanket over Silicon Valley’s burning fire for MOOCs. Researchers at Penn, for instance, found that as few as 5% of MOOC registrants actually finish their courses, while only a fraction of those attain high grades. What’s worse is that MOOC users were found to disproportionately come from educated, male and wealthy backgrounds, largely in the USA. So much, then, for the fad that was the MOOC revolution. Or so the story goes.

Why do MOOCs suck so much at teaching the people that they are trying to help? One of the many reasons is that they are not well-designed. Robert Ubell from NYU has been doing e-learning for a long time and thinks that MOOCs suck because they were not designed to keep users engaged, like a good teacher would. Ubell points to active learning, a theory that getting students deeply involved in the learning process will produce better outcomes. For example, active learning holds that asking students questions during a lecture would produce better results because students are more deeply engaged in the process. By involving students, we can better keep their attention, which is one of the fundamental brain mechanisms governing learning. MOOCs suck at knowing when you are paying attention. Good teachers know this by the glazed look in their students’ eyes as their attention drifts into the mental netherworld between the classroom and PewDiePie’s latest embarrassment.

If we had a good way to measure attention, we would have a way of improving MOOCs. The problem is that scientists do not yet have reliable ways of measuring attention through a computer. Sure we can look at clicks or scrolls, but do clicks really tell you much when you are rewarded by faking it? Alternatively, we could ask you whether you are paying attention, but this will disrupt the course experience. This is why I am looking at brain data. If we watch people’s brains, we can reliably understand when they stop paying attention, and maybe build MOOCs that teach better. It’s a bold idea, but if it works, we could develop technologies that achieve this original vision: quality education for everyone, everywhere, for free.
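As a toy illustration of what such a system might look like (purely hypothetical; this is not a description of any real MOOC platform or of my study’s actual classifier), here is a sketch in Python: delta/alpha band-power features labeled by self-report probes train a simple nearest-centroid rule, and the video player inserts a question whenever the rule flags mind wandering. All feature values, labels, and function names here are invented for the example.

```python
import numpy as np

# Hypothetical probe data: each row is a (delta, alpha) relative band-power
# pair, labeled by self-reports collected during the lecture.
rng = np.random.default_rng(42)
focused = rng.normal([0.2, 0.3], 0.05, size=(50, 2))     # "on task" reports
wandering = rng.normal([0.5, 0.6], 0.05, size=(50, 2))   # "mind wandering" reports

# Nearest-centroid classifier: one mean feature vector per attentional state
centroids = {
    "focused": focused.mean(axis=0),
    "wandering": wandering.mean(axis=0),
}

def classify(features):
    """Guess the viewer's attentional state from current band powers."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

def maybe_intervene(features):
    """Pause the video and insert a question when attention seems to drift."""
    state = classify(np.asarray(features))
    return "pause_and_quiz" if state == "wandering" else "keep_playing"

print(maybe_intervene([0.55, 0.65]))  # → pause_and_quiz
print(maybe_intervene([0.18, 0.28]))  # → keep_playing
```

A production system would need far more than two features and a far better model, but the loop is the point: measure, classify, and intervene before attention is lost for good.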