Some reflections on collaborations in light of CHI 2023

I “grew up” in an academic discipline where people tended to work alone. A philosophy student typically writes self-authored reflective papers. While mature philosophers definitely understand the importance of collaboration, my studies led me to believe, at least early on, that one’s greatest work is achieved alone.

As I got older, I realized how important it is to have fantastic teams and communities. Their importance goes beyond the synergy between people (i.e. a whole that is greater than the sum of its parts). It’s also how some of humanity’s best attributes are unlocked.

Working together allows you to understand the world in new and sometimes profound ways. These understandings help us advance. Our species’ ‘secret sauce’ is not our intelligence, but how our cultures transfer new ideas and ways of working in the world. Humans generally don’t invent things by sitting around in isolation; they build on the insights of others, both past and present.

I think this idea captures part of the spirit of communities like ACM CHI and makes them great. It was an honour to attend #chi2023 in Hamburg and see first-hand how teams can work together to really move the needle. It is also a fantastic privilege to have great collaborators like Anika and Aaron.

By working together, this community leverages information technologies to create fantastic new possibilities for humans. Many of CHI’s leaders are committed not just to advancing themselves, but to building an ever-growing and diverse community. This can help us all leverage our greatest strengths in the long term.

There’s still a lot of work to do. I am thankful for collaborations and look forward to building new ones. Thank you to ACM CHI for being awesome!

Do brain-computer interfaces raise new privacy concerns?

Imagine being able to control machines using your thoughts alone. This idea inspired many popular science fiction franchises such as The Matrix or Pacific Rim. In these stories, humans are able to interface with digital machines that detect brain signals, which gives the characters superhuman powers. For many, these brain-computer interfaces (BCIs) seem futuristic and fantastic, which may explain the hype behind recent innovations such as Elon Musk’s Neuralink.

However, BCIs are not new, and neither are most of the concerns that they raise. Humans have been able to control computers with brain signals since the early 1990s. Existing BCIs often leverage non-invasive electroencephalography (EEG), which simply sits on someone’s head, and they usually use machine learning to detect changes in brain patterns. These detected brain patterns have been applied to a limited number of functions, such as spelling applications and video games. Nonetheless, research in non-invasive EEG, including ongoing research in our group at Dalhousie University, has yielded various innovative applications, such as detecting mind wandering while watching long online lectures.
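For readers curious about what “machine learning on brain patterns” typically looks like in practice, here is a tiny, illustrative sketch of classifying two mental states from pre-computed EEG features. It is not the pipeline from any particular study; the synthetic data, feature layout, and classifier choice are all assumptions made for the sake of the example.

```python
# Illustrative only: a toy EEG-state classifier (not from any specific study).
# Rows are EEG epochs, columns are pre-computed features (e.g. band powers).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

n_epochs, n_features = 200, 16
X = rng.normal(size=(n_epochs, n_features))   # synthetic "features"
y = rng.integers(0, 2, size=n_epochs)         # 0 = one mental state, 1 = another
X[y == 1, 0] += 1.0                           # plant a learnable signal for the demo

# Linear discriminant analysis is a common, simple baseline for EEG features.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

In a real BCI, the features would come from recorded brain signals rather than random numbers, and the predicted state would be mapped to an action such as selecting a letter in a speller.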

For some individuals, such as people with locked-in syndrome, BCIs are not just novel technologies but life-enhancing devices through which they can interact with the world. Even a limited EEG-based BCI can enable someone living with paralysis to regain a degree of freedom. This, in turn, inspired medical researchers and biomedical engineers to pursue much more functional BCIs by leveraging brain implants. The implants that have since been developed have enabled patients who were otherwise paralyzed to command prosthetics, often far more precisely than they could with an EEG-based BCI. While invasive BCI offers a much higher degree of functionality than non-invasive BCI, progress has been limited because these implants require surgeons to drill holes in a patient’s skull, which is a high-risk procedure.

BCI technology is now changing quickly because emerging technologies are both comparatively less invasive and highly functional. On July 6th, a company called Synchron became the second company in history to receive US Food and Drug Administration approval for clinical trials of a permanent, invasive BCI. While not the first to implement a permanent surgical BCI (that is, a BCI that sits under the scalp rather than on it), Synchron’s device is implanted through blood vessels, which makes it much easier to deploy and less risky than transcranial surgery. In the coming months, Synchron plans to conduct a wider trial which, if successful, will allow them to sell the implant as a medical device. This development would mark a major milestone in the advancement of BCI technology, as it would enable digital programs to interface with smaller collections of neurons and, in turn, enable a wider range of people to precisely control robotic prosthetics with their brains. It is reasonable to expect a greater proliferation of the technology in medical applications, and possibly in commercial applications in the future.

With this milestone there will also be new legal and policy challenges. While these devices are not qualitatively different from existing BCI technologies, I argue that the complexity and sensitivity of their data warrant new privacy considerations that past technologies did not.

The complexity raises specific concerns with privacy and consent, especially with potential commercial applications of implanted BCI. Since the development of the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada has affirmed principles of informed consent and limiting the use of data. The recently proposed Consumer Privacy Protection Act, which would supersede PIPEDA, would expand on these principles to include a more rigorous expectation of documenting consent, similar to the GDPR.

The complexity of systems built on this new generation of BCI would present challenges for informed and documented consent. Machine learning systems that use this type of brain data would be very complex, involving deep learning that is non-transparent and often developed from other individuals’ data. Companies and watchdogs may wish to consider guidelines for clearly communicating the nature of the data that would be collected, its use, and the limits of its use. It may also motivate the application of explainable AI solutions to BCIs, which would help users understand exactly what their data is being used for.

The new generation of BCI will likely collect a range of data that is more sensitive than that of non-invasive BCI. With EEG-based BCI, the detected signals are generated from a limited range of neural patterns, which are difficult to link to a particular individual or to use to diagnose an illness. Many participants of EEG studies even consent to publishing their brain data publicly on the internet, given that it is impossible to link this data to a specific individual. However, with an endovascular BCI, the signals will be much less limited, and it would be more feasible to identify a specific individual from them.

This may raise new issues with respect to Canada’s privacy torts. Following the ruling in Jones v Tsige (2012), Ontario recognized damages that could be incurred from the improper access of personal data. The courts have since established that the possible damages are also related to the degree of sensitivity of the data. While it is not immediately clear to what degree someone’s brain data in this context could be considered sensitive, it is clear that there would be at least some sensitivity concerns analogous to health data. Whether this would be categorically different from, say, a Fitbit would likely hinge on precisely how identifiable the data would be and what types of health information could be inferred from it.

As the technology develops, it will become increasingly important for legal and policy professionals to understand its nuances. BCIs are no longer a matter of science fiction and are quickly becoming much more sophisticated. Moving forward, Canadians would benefit from considering these issues in light of recent advancements. While these technologies will not be categorically different from existing ones, we should take care to consider the implications of their data complexity and sensitivity.

Three tips on how to learn well during COVID-19, backed by brain science

If you are reading this, you are likely aware that most universities in Canada are transitioning to an online format for the Fall 2020 semester. This has presented many challenges for professors because teaching online is totally different! We are putting a lot of effort into learning how to teach effectively. However, in my many discussions about online learning over the past few months, we haven’t spent a lot of time talking about the other side of the challenge: how to effectively prepare students for this transition. I thought I would share three tips on how to be an effective online learner, some of which come from my research.

Tip 1: Take micro-breaks regularly

I recently re-analyzed some of my PhD thesis data and found something interesting. In my PhD thesis, I described studies of people’s brain patterns as they watched long lecture videos. The software I created also asked them questions periodically throughout the video. When asked at the 15-minute mark, participants reported being largely on task, though at the 30-minute mark they reported a significantly higher degree of mind wandering. I also found that the degree of reported mind wandering significantly impacted how well students learned from the lecture. Long lectures are hard to learn from! This isn’t news.

What I recently discovered is that there are some brain patterns that predict mind wandering pretty accurately. These patterns were significantly higher levels of delta waves (which are associated with sleepiness) and alpha waves (which are associated with meditation and self-directed thoughts). Clearly, the longer you focus on a lecture video, the more likely your brain is to veg out and drift to other things. Though this does not prove that taking breaks would improve learning, the pattern is clear, and disrupting it may result in better learning. The picture below illustrates the degree of these waves in states of being “completely on task” versus “completely mind wandering”.

Regular breaks every 20 minutes or so may prevent your mind from wandering. When your mind is focused, you learn better. 

Figure: Differences in delta (sleepy waves) and alpha (meditation waves) when your mind is wandering. Minds are more likely to wander as a lecture progresses.
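If you are curious what this looks like computationally, here is a minimal sketch of how delta and alpha band power can be estimated from a single EEG epoch and compared against a personal baseline. It is illustrative only, not my actual analysis pipeline; the sampling rate, band limits, and the simple thresholding heuristic are all assumptions.

```python
# Illustrative only (not my actual thesis pipeline): estimate delta and alpha
# band power from one EEG epoch and apply a crude mind-wandering heuristic.
# The sampling rate, band limits, and baseline comparison are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                                      # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "alpha": (8, 12)}   # conventional band limits

def band_powers(epoch, fs=FS):
    """Average power spectral density in each band for a 1-D EEG epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def looks_like_mind_wandering(epoch, baseline):
    """Crude heuristic: delta AND alpha both elevated relative to a
    per-person baseline is consistent with mind wandering."""
    p = band_powers(epoch)
    return p["delta"] > baseline["delta"] and p["alpha"] > baseline["alpha"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_epoch = rng.normal(size=FS * 30)       # 30 s of synthetic "EEG"
    later_epoch = rng.normal(size=FS * 30) * 1.5    # noisier, higher-power epoch
    baseline = band_powers(baseline_epoch)
    print(band_powers(later_epoch))
    print("mind wandering?", looks_like_mind_wandering(later_epoch, baseline))
```

A real system would use many electrodes, per-person calibration, and a trained classifier rather than fixed thresholds, but the basic ingredients are the same: band power features and a decision rule.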

Tip 2: Use multimedia that is most effective for you

Some people like learning from books while others love YouTube videos (I am guilty!). However, for a serious online learner, it is usually best to have a combination of tools at your disposal. Education scientist Richard Mayer has spent much of his career explaining how and why this works. In Multimedia Learning he argued that “humans possess separate information-processing channels for visually represented material and auditorily represented material.” In other words, learning from a combination of pictures and words is better than learning from either pictures alone or words alone.

Our brains are organized into a series of networks that carry out various sensory and processing tasks. Many of these networks have common features (e.g. the audio and visual attention networks), but ultimately use different tools to do their jobs. If you push one network too hard, it is difficult to effectively abstract information into knowledge. However, by distributing the workload, your brain can more effectively abstract experiences and information to learn. Different people have different capacities, which may be one of the factors influencing preferred learning styles.

Sometimes, however, online learning is hampered by language barriers, or perhaps by a poorly delivered lesson. When this happens, a lesson generates what John Sweller called extraneous cognitive load. Though we need a bit of challenge to learn effectively, it can seem impossible when difficult material is delivered in a hard-to-understand format. When this happens, it is extra important to find other resources that can supplement a lesson. If you are lucky, these resources will be in a format that works for you. Fortunately, in a world of YouTube and Coursera, we have no shortage of content to choose from.

If you find it hard to learn from a video, crack open your textbook. If you can’t find answers, don’t hesitate to search for other outside resources or reach out to your teaching team for help.

Tip 3: Be intentional about socializing online

Finally, one of the great challenges presented by the COVID-19 situation in Canada is that we will not be able to hold in-person social activities. In some ways, it almost feels like we live on spaceships, alone-yet-together; what YouTube creator C.G.P. Grey called “Spaceship You”. For many students, “Spaceship You” is challenging because good learning experiences are shaped in large part by your community of learning.

E-learning scholars have noticed that “social presence” is a critical component of e-learning success. In a classic paper, Johnson et al. (2008) found that satisfactory e-learning environments were characterized as 1) personal, 2) sociable, 3) sensitive, 4) warm, and 5) active. A more recent meta-analysis on the role of social presence in online learning supports this finding; it is clear that social presence is important for learning online.

The challenge with social presence is that though instructors play a critical part in facilitating it, students must also take initiative to create such an environment. At Dalhousie, we have plans to implement some pretty new technology to help with this (which will be announced later this summer). However, technology is only good if it is actually used. It will be critically important for both students and instructors to be actively engaged on technology platforms.

University is a social experience. Be active on your course chat and other online community platforms. 

The fall semester will certainly be different from any that have come before. Though there are challenges, there may also be unique opportunities for personal and professional growth. We don’t have all of the answers, but we are working hard to create a good learning experience. I am looking forward with an open mind; with any luck, 2020 will be remembered as a year of unprecedented internet innovation, not just disruption.

Why cognitive enhancement technologies do not need new regulation

After digging through old files from my PhD, I ran into a copy of a class presentation I worked on with Sarah Macleod and Mohammad Habibnezhad on cognitive enhancement technologies. Examples of cognitive enhancement technologies include cognition-enhancing drugs such as Ritalin, or even whatever the heck Elon Musk is working on at Neuralink; what they have in common is that they make us better at thinking or learning. One of the potential future applications of my thesis work is the development of computer-based education that adapts to users’ attention. Such technologies may one day deliver education radically better than either our current MOOCs or even in-person lectures, and would therefore be an example of a cognitive enhancement technology.

The presentation that we gave concerned whether cognitive enhancement technologies should be strictly regulated. We argued that cognitive enhancement technologies do not require additional regulation. Our argument was best summarized by the following chart:

Cognitive enhancement technologies can seem scary, but there are already existing frameworks for understanding them. In Canada, the Food and Drugs Act governs all food, drugs, and medical devices, including any devices that modify or correct the body structure of humans. Both cognitive-enhancing drugs and devices which modify the functioning of the human brain (as in Musk’s case) would therefore be governed by the Food and Drugs Act. This regulatory regime is fundamentally designed to ensure the safety of such technologies, or, failing that, to ensure that they offer sufficiently great rewards for the expected risks. If such high-risk technologies are unable to offer great benefits to their users, they are likely to be banned.

I find the lower half of the matrix more interesting. When technologies are low risk to humans, we can envision them as either goods or rights. For example, there is pretty good evidence that coffee is a cognitive enhancer. There is some evidence that caffeine provides benefits to learning and memory, and even better evidence that it improves reaction time. However, the benefits of caffeine are slight. In market economies, we normally think of these sorts of things as ‘goods,’ insofar as they satisfy a consumer’s want. The main benefit of coffee is that it satisfies my desire for coffee; potential cognitive enhancement is secondary.

Sometimes, however, goods can be low risk and offer advantages so great that not having them limits a person’s capability to be a functioning member of society. For example, access to primary and secondary education is categorically different from access to coffee. People who do not have access to primary and secondary education are severely limited in their ability to take part in society. Children who have access to education can learn the skills required to participate in the economy or polity, or potentially pursue tertiary education of their choosing. Those who do not have access are severely limited in their capabilities and agency and will never be able to choose how to contribute to society. This idea is better summarized by Amartya Sen and Martha Nussbaum, and is often called the capability approach to rights.

I believe that low-risk, high-benefit cognitive enhancement technologies may fall into this category of ‘rights’ if they truly offer large political or economic advantages to their consumers. If we imagine a radically better way to teach students using learning technology, students who used such hypothetical technology would have significantly greater capabilities in society. They could potentially learn in a fraction of the time, becoming much more productive and potentially much more competitive. People who do not have access to this hypothetical technology would likewise be severely limited in their capabilities. We would therefore need to consider universal access to such cognitive-enhancing technologies, at least if they become adopted at a large scale. These days, there are even business models that incentivize both innovation and free access to high-benefit technologies; these might serve as a starting point as cognitive enhancement technologies eventually make their way to open access.

Long story short, this is why I believe that cognitive enhancing technologies do not need additional regulation. If they are high-risk technologies, we have existing regulation that covers them. If they are low risk, they are either goods or rights depending on their benefits; we have existing methods of distributing these. Black Mirror will have to wait on this one.