The secret of our success (with online learning)

In The Secret of Our Success, anthropologist Joseph Henrich outlines his provocative thesis on why humans rose to become the ecologically dominant species: our ability to learn from each other. For many of us steeped in cognitive science, this position is surprising. We would expect the reason to be that human beings are just plain smarter. After all, humans seem to be considerably better at cognitive tasks, such as learning and processing large amounts of information. Prominent thinkers such as Steven Pinker have advanced this view, holding that human brains evolved to be broadly versatile.

However, there is interesting evidence that humans are not particularly smarter than our closest genetic relatives. In a series of studies comparing the intelligence of apes and two-year-old humans, Esther Herrmann and her team demonstrated that human infants fare no better than chimpanzees at most cognitive tasks (e.g. quantities, causality) or at spatial reasoning (e.g. object permanence). However, human infants demonstrated considerably better social cognition. Adult chimps fared no better than juvenile chimps at these tasks, but adult humans excel at them. It follows that an infant’s capacity for social learning is a major factor in their ability to learn new things, such as how to perform at cognitive tests. Henrich argues that this phenomenon also explains the history of European castaways, whose survival was determined not so much by their years of provisions and time to learn how to survive, but by whether they were able to befriend and learn from local Indigenous communities.

If we accept Henrich’s argument, then what makes humans so successful is our capacity for cultural learning. Humans excel at learning passively, picking up on subtle cues through established social connections and through the cumulative lens and context of their society. Our societies shape us and allow us to transfer knowledge that no individual could accumulate alone in a lifetime. Philosophers such as Hans-Georg Gadamer and John McDowell described a similar phenomenon as Bildung, or cultural learning, and argued that it is through a lifelong process of learning and education that knowledge becomes possible. Culture can also extend beyond a nation, permeating subcultures and organizations.

I think this view has consequences for how we understand information technologies in organizations. When we implement a technology, we often consider factors such as its design, its usability, user training, and effective change management. However, we rarely consider an organization’s culture and its implications for how people learn new technologies.

Online learning during Covid-19 offered an interesting case study in cultural learning with respect to information technology. In a study at Dalhousie University during the pandemic, we surveyed students about their experience to understand the factors that led to satisfactory online learning. Prior to the pandemic, our university had taught primarily in person, and few people in our organization had experience either teaching or learning with online tools. We ultimately found that students’ perception of the difficulty of learning online was a strong predictor of dissatisfaction, and that this dissatisfaction was partially explained by whether students perceived that they had the necessary technical skills.
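This kind of "partially explained by" finding is what a mediation-style regression captures. The sketch below is a toy illustration on synthetic data, not our actual survey data; all variable names and effect sizes are hypothetical. It shows how controlling for perceived technical skill shrinks the coefficient of perceived difficulty on dissatisfaction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized variables (synthetic data only).
skill = rng.normal(size=n)                                   # perceived technical skill
difficulty = -0.6 * skill + rng.normal(scale=0.8, size=n)    # perceived difficulty
dissatisfaction = 0.5 * difficulty - 0.3 * skill + rng.normal(scale=0.5, size=n)

def ols(y, *predictors):
    """Ordinary least squares with an intercept; returns the predictor coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:]

(total,) = ols(dissatisfaction, difficulty)            # difficulty alone: total effect
direct, _ = ols(dissatisfaction, difficulty, skill)    # controlling for skill: direct effect

# The difficulty coefficient shrinks once skill is controlled for: part of the
# difficulty-dissatisfaction link runs through perceptions of technical skill.
print(f"total effect: {total:.2f}, direct effect: {direct:.2f}")
```

In this toy setup the total effect exceeds the direct effect, which is the regression signature of partial explanation by a third variable.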

This leads me to ask whether our collective experience of the pandemic would have been better if our organization had already developed a culture of learning online. Internal surveys later revealed that students at Dalhousie University were largely dissatisfied with the emergency response, though the Faculty of Management fared better than most. In hindsight, our faculty’s comparatively higher satisfaction was possibly explained by its prior experience with online learning through its blended learning programs, as well as its focus on implementing teaching technologies such as ERPsim.

The culture of an organization thus plays an important part in supporting its stakeholders as they develop new skills and capabilities. Moving forward, we should push to make technological innovation a key part of our identity, not just our branding. It is imperative that we create cultures where individuals have resources (including time) for innovation, where failure is tolerated, and where teams are rewarded for taking risks. In doing so, we can build organizations that are better equipped for extraordinary circumstances.

Leaders should focus on building the right culture. They should also heed the adage often attributed to Peter Drucker: “culture eats strategy for breakfast”. It extends well beyond online learning; culture might just be the most important factor in building a successful organization.

Do brain-computer interfaces raise new privacy concerns?

Imagine being able to control machines using your thoughts alone. This idea has inspired popular science fiction franchises such as The Matrix and Pacific Rim, in which humans interface with digital machines that detect brain signals, giving the characters superhuman powers. For many, these brain-computer interfaces (BCIs) seem futuristic and fantastical, which may explain the hype behind recent ventures such as Elon Musk’s Neuralink.

However, BCIs are not new, and neither are most of the concerns they raise. Humans have been able to control computers with brain signals since the early 1990s. Existing BCIs often rely on non-invasive electroencephalography (EEG), which simply sits on a person’s head, and usually use machine learning to detect changes in brain patterns. These detected patterns have been applied to a limited number of functions, such as spelling applications and video games. Nonetheless, research in non-invasive EEG, including ongoing work in our group at Dalhousie University, has yielded various innovative applications, such as detecting mind wandering while watching long online lectures.
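To make the pipeline concrete, here is a minimal sketch of the kind of processing an EEG-based BCI performs, using entirely synthetic signals (the sampling rate, frequencies, and "mental states" below are illustrative assumptions, not any particular system): extract band power in a frequency band of interest and learn a simple decision boundary between two conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
FS = 250                     # sampling rate in Hz (a common EEG rate)
t = np.arange(0, 2.0, 1 / FS)  # one 2-second epoch

def epoch(alpha_amp):
    """Synthetic single-channel EEG epoch: a 10 Hz alpha rhythm plus broadband noise."""
    phase = rng.uniform(0, 2 * np.pi)
    return alpha_amp * np.sin(2 * np.pi * 10 * t + phase) + rng.normal(scale=1.0, size=t.size)

def alpha_power(x):
    """Power in the 8-12 Hz alpha band, via an FFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / FS)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].sum()

# Two hypothetical mental states: strong alpha (e.g. relaxed) vs weak alpha (focused).
train_hi = [alpha_power(epoch(2.0)) for _ in range(40)]
train_lo = [alpha_power(epoch(0.3)) for _ in range(40)]
threshold = (np.mean(train_hi) + np.mean(train_lo)) / 2   # simple learned boundary

# Classify held-out synthetic epochs by thresholding their alpha power.
test_set = [(epoch(2.0), 1) for _ in range(20)] + [(epoch(0.3), 0) for _ in range(20)]
preds = [int(alpha_power(x) > threshold) for x, _ in test_set]
accuracy = np.mean([p == y for p, (_, y) in zip(preds, test_set)])
print(f"classification accuracy on synthetic epochs: {accuracy:.2f}")
```

Real systems use richer features, multiple channels, and stronger classifiers, but the structure is the same: signal acquisition, feature extraction, and a model trained to map brain patterns to a small set of commands.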

For some individuals, such as people with locked-in syndrome, BCIs are not just novel technologies but life-enhancing devices through which they can interact with the world. Even a limited EEG-based BCI can enable someone with paralysis to regain a degree of freedom. This, in turn, inspired medical researchers and biomedical engineers to pursue far more functional BCIs by means of brain implants. The implants developed since have enabled patients who were otherwise paralyzed to command prosthetics, often far more precisely than they could with an EEG-based BCI. But while invasive BCI offers a much higher degree of functionality than non-invasive BCI, progress has been limited because these implants require surgeons to drill holes in a patient’s skull, a high-risk procedure.

BCI technology is now changing quickly because emerging approaches are both comparatively less invasive and highly functional. On July 6th, a company called Synchron became the second company in history to receive US Food and Drug Administration approval for clinical trials of a permanent implanted BCI. While not the first to implant a permanent surgical BCI (that is, a BCI that sits under the scalp rather than on it), Synchron delivers its device through the blood vessels, which makes it much easier to deploy and less risky than transcranial surgery. In the coming months, Synchron plans to conduct a wider trial which, if successful, will allow it to sell the implant as a medical device. This would mark a major milestone in the advancement of BCI technology, as it would enable digital programs to interface with smaller collections of neurons and, in turn, enable a wider range of people to precisely control robotic prosthetics with their brains. It is reasonable to expect greater proliferation of the technology in medical applications, and possibly in commercial applications in the future.

This milestone will also bring new legal and policy challenges. While these devices are not qualitatively different from existing BCI technologies, I argue that the complexity and sensitivity of the data they collect warrant new privacy considerations that past technologies did not.

The complexity raises specific concerns around privacy and consent, especially for potential commercial applications of implanted BCI. Since the enactment of the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada has affirmed the principles of informed consent and limited use of data. The recently proposed Consumer Privacy Protection Act, which would supersede PIPEDA, would expand on these principles with a more rigorous expectation of documented consent, similar to the GDPR’s requirements.

The complexity of the systems that could be built on the new generation of BCI would challenge informed and documented consent. Machine learning systems using this type of brain data would be very complex, involving deep learning models that are non-transparent and often trained on other individuals’ data. Companies and watchdogs may wish to develop guidelines for clearly communicating what data would be collected, how it would be used, and the limits of that use. This may also motivate applying explainable AI techniques to BCIs, which would help users understand exactly what their data is being used for.

The new generation of BCI will likely collect data that is more sensitive than that of non-invasive BCI. With EEG-based BCI, the detected signals are generated from a limited range of neural patterns that are difficult to link to a particular individual or to use to diagnose an illness. Many participants in EEG studies even consent to having their brain data published openly on the internet, given how difficult it is to link the data to a specific individual. With an endovascular BCI, however, the signals will be much richer, and identifying a specific individual from them would be more feasible.

This may raise new issues with respect to Canada’s privacy torts. Following the ruling in Jones v Tsige (2012), Ontario recognized damages that can flow from improper access to personal data. The courts have since established that possible damages are also tied to the sensitivity of the data. While it is not immediately clear how sensitive someone’s brain data would be considered in this context, there would clearly be at least some sensitivity concerns analogous to those of health data. Whether this would be categorically different from, say, a Fitbit would likely hinge on precisely how identifiable the data is and what types of health information could be inferred from it.

As the technology develops, it will become increasingly important for legal and policy professionals to understand its nuances. BCIs are no longer a matter of science fiction and are quickly becoming much more sophisticated. Moving forward, Canadians would benefit from reconsidering privacy frameworks in light of these advancements. While the new devices will not be categorically different from existing technologies, we should take care to consider the implications of their data complexity and sensitivity.