Private messages contribute to the spread of COVID-19 conspiracies

Written by , Ryerson University; , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

Private messaging apps allow information to spread in an unchecked manner.

The COVID-19 global pandemic has been accompanied by misinformation about the virus, its origins and how it spreads.

One in seven Canadians thinks there is some truth to the claim that Bill Gates is using the coronavirus to push a vaccine with a microchip capable of tracking people. Those who believe this and other COVID-19 conspiracy theories are much more likely to get their news from social media platforms like Facebook or Twitter.

In extreme cases, conspiracy thinking spurred by online disinformation can result in hate-fuelled violence, as we saw in the insurrection at the U.S. Capitol, the Québec City mosque shooting, the Toronto van attack and the incident in 2020 where an armed man crashed his truck through the gates of Rideau Hall.

Moderate content

These and other events have placed pressure on social media platforms to label, remove and slow the spread of harmful, publicly viewable content. One such response to the spread of misinformation was the deplatforming of Donald Trump during the final weeks of his presidency.

These discussions on content moderation have mainly centred on platforms where content is generally open and accessible to view, comment on and share. But what’s happening in those online spaces that aren’t open for all to see? It’s much harder to say. And perhaps not surprisingly, conspiracy theories and other harmful content are spreading on private messaging apps like WhatsApp, Telegram, Messenger and WeChat.

By leveraging large groups of users and long chains of forwarded messages, false information can still go viral on private platforms.

White nationalists and other extremist groups are trying to use messaging apps to organize, and malicious hackers are using private messages to conduct cybercrime. False stories spreading on messaging apps have also led to real-world violence, as happened in India and the United Kingdom.

Trust and private communication

We conducted a survey of 2,500 Canadian residents in March 2021 and found that they’re increasingly using private messaging platforms to get their news.

Overall, 21 per cent said that they rely on private messages for news — up from 11 per cent in 2019. We also found that people who regularly receive their news through messaging apps are more likely to believe COVID-19 conspiracy theories, including the false claim that vaccines include microchips.

There is a level of intimacy in private messaging apps that’s different from news viewed on social media feeds or other platforms, with content shared directly by people we often know and trust. A majority of Canadians reported that they had a similar level of trust in the news they receive on private messaging apps as they do in the news from TV or news websites.

Our research also uncovered a uniquely Canadian phenomenon. As a multicultural society with many newcomers, the Canadian private messaging landscape is remarkably diverse. For example, people who have arrived in Canada in the last 10 years were more than twice as likely to use WhatsApp. Similarly, newcomers from China were five times more likely to use WeChat.

We also found that half of Canadians receive messages that they suspect are false at least a few times per month, and that one in four receive messages with hate speech at least monthly. These rates were higher among people of colour. Because different apps provide different ways of spreading and mitigating harmful content, each requires a tailored strategy.

A graph showing the self-reported frequency of receiving harmful private messages in a representative survey of Canadian residents. Photo credit: Cybersecure Policy Exchange, Ryerson University.

Mitigating harm

Platforms and governments around the world are grappling with the tension between mitigating online harms and protecting the democratic values of free expression and privacy, particularly among more private modes of communication. This tension is only exacerbated by some platforms’ use of privacy-preserving end-to-end encryption that ensures only the sender and receiver can read the messages.

Some messaging apps have been experimenting with how to reduce the spread of harmful materials, including the introduction of limits on group sizes and on the number of times a message can be forwarded. WhatsApp is now testing a feature that nudges users to verify the source of highly forwarded messages by linking to a Google search of the message content. Some experts are also advancing the idea of adding warning labels to false news shared in messages — a concept that a majority (54 per cent) of Canadians supported when we described the idea.
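The forwarding limits described above can be sketched as a counter attached to each message: once the count hits a cap, the platform refuses further forwards. This is an illustrative model only; the `Message` class and the cap of five are assumptions, not WhatsApp's actual implementation.

```python
class ForwardLimitError(Exception):
    """Raised when a message has hit the platform's forwarding cap."""

class Message:
    FORWARD_CAP = 5  # assumed cap; real platforms tune this value

    def __init__(self, text, forward_count=0):
        self.text = text
        self.forward_count = forward_count

    def forward(self):
        """Return a forwarded copy, or refuse once the cap is reached."""
        if self.forward_count >= self.FORWARD_CAP:
            raise ForwardLimitError("forwarding cap reached")
        return Message(self.text, self.forward_count + 1)

# A message can travel only FORWARD_CAP hops along a forwarding chain.
msg = Message("breaking news!")
for _ in range(Message.FORWARD_CAP):
    msg = msg.forward()
# The next forward attempt is refused, slowing viral chains.
```

Even a small cap like this sharply limits how far a single chain can spread without stopping person-to-person sharing entirely.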

Some examples of private messaging app features that could reduce harms, such as group size or message forwarding limits. Photo credit: Cybersecure Policy Exchange, Ryerson University.

However, there is certainly more that governments can do in this quickly moving area. More transparency is required from messaging platforms about how they’re responding to user reports of harmful material and what approaches they’re using to stall the spread of these messages. Governments can also support digital literacy efforts and invest in research about harms through private messaging in Canada.

As Canadians shift to more private modes of communication, policy needs to keep up to maintain a vibrant and cohesive democracy in Canada while protecting free expression and privacy.

Canada should be transparent in how it uses AI to screen immigrants

Written by , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

The Canadian government’s employment of AI technology needs to be transparent.

Like other governments around the world, the Canadian federal government has turned to technology to improve the quality and efficiency of its public services and programs. Many of these improvements are powered by artificial intelligence (AI), which can raise concerns when introduced to deliver services to vulnerable communities.

To ensure responsible use of AI, the Canadian government developed the “algorithmic impact assessment” tool, which determines the impact of automated decision systems.

Pilot project

The algorithmic impact assessment was introduced in April 2020, and very little is known about how it was developed. But one of the projects that informed its development has garnered concern from media: the Immigration, Refugees and Citizenship Canada’s (IRCC) AI pilot project.

The AI pilot project introduced by IRCC in 2018 is an analytics-based system that sorts through a portion of temporary resident visa applications from China and India. IRCC has previously explained that because its temporary resident visa AI pilot was one of the most concrete examples of AI in government at the time, it directly engaged with and provided feedback to the Treasury Board Secretariat of Canada during the development of the algorithmic impact assessment.

Not much is publicly known about IRCC’s AI pilot project. The Canadian government has been selective about sharing information on how exactly it is using AI to deliver programs and services.

A 2018 report by the Citizen Lab investigated how the Canadian government may be using AI to augment and replace human decision-making in Canada’s immigration and refugee system. During the report’s development, 27 separate access to information requests were submitted to the Government of Canada. By the time the report was published, all remained unanswered.

Minister of Immigration, Refugees and Citizenship Ahmed Hussen responds to questions about Canada’s use of artificial intelligence to help screen and process immigrant visa applications during question period in the House of Commons on Sept. 18, 2018. Photo credit: THE CANADIAN PRESS/Adrian Wyld.

The case of New Zealand

While the algorithmic impact assessment is a step in the right direction, the government needs to release information about what it claims is one of the most concrete examples of AI. Remaining selectively silent may lead the Canadian government to fall victim to the allure of AI, as happened in New Zealand.

In New Zealand, a country known for its positive immigration policy, reports emerged that Immigration New Zealand had deployed a system to track and deport “undesirable” migrants. The data of 11,000 irregular immigrants — who attempt to enter the country outside of regular immigration channels — was allegedly being used to forecast how much each irregular migrant would cost New Zealand. This information included age, gender, country of origin, visa held upon entering New Zealand, involvement with law enforcement and health service usage. Coupled with other data, this information was reportedly used to identify and deport “likely troublemakers.”

Concerns surrounding Immigration New Zealand’s harm model ultimately drove the New Zealand government to take stock of how algorithms were being used to crunch people’s data. This assessment set the foundation for systematic transparency on the development and use of algorithms, including those introduced to manage migration.

In Canada, by contrast, advanced analytics are used to sort applications into groups of varying complexity. More specifically, temporary resident visa applications are reviewed for eligibility and admissibility.

The Canadian pilot is an automated system trained on rules established by experienced officers to identify characteristics in applications that indicate a higher likelihood of ineligibility. For straightforward applications, the system approves eligibility solely based on the model’s determination, while eligibility for more complex applications is decided upon by an immigration officer. All applications are reviewed by an immigration officer for admissibility.

A report by public broadcaster RNZ on Immigration New Zealand’s data profiling.

Levels of review

For New Zealand, publishing information on how, why and where the government was using AI offered the opportunity to provide feedback and make recommendations. These efforts led to the New Zealand government developing an Algorithm Charter on the use of algorithms by government agencies. More importantly, the public can now understand how the government is experimenting with new capabilities and offer their input.

Although IRCC has been careful in deploying AI to manage migration, there is great benefit in being transparent about its endeavours involving AI. By engaging in open innovation and making information about IRCC’s AI pilot project public, the government can start having meaningful conversations, sparking thoughtful innovation and encouraging public trust in its application of emerging technologies.

How game worlds are preparing humanitarian workers for high-stakes scenarios

Written by , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

The World Health Organization is building a game world to allow medical practitioners to admit virtual patients for emergency treatment during a mass casualty simulation.

The pandemic has bred a new dependence on online technologies for work and social engagement. Immersive technology, such as that used in 3D video games, virtual reality and augmented reality, can now be designed so that the person experiencing it is transported into a socially rich online world.

This began with the design of massively multiplayer online role-playing games and continues with other platforms for living in an altered digital reality with purposeful activity, such as the platform Second Life.

Introduction to Second Life.

During pandemic shutdowns, online role-playing gamers still had access to extensive social connections with many people in virtual worlds. Players communicated free of charge with hundreds of other people on the real-time voice platform Discord.

The combination of an immersive 3D video game and real-time voice communications created a reassuring space when the external world was cut off.

But game worlds are not just for recreational communities. This form of immersion based on a desktop computer experience has now reached the medical and humanitarian fields as well.

Building game world simulations

A game world is an immersive virtual environment that can transport users into an alternate reality. This means that in corporate, academic or life coaching settings, people can also learn and practise extending their skills in virtual space.

Some life coaches use game worlds to help people imagine alternative settings or outcomes. Video from consultant Katharina Kaifler.

Perhaps most importantly, a game world is not a game. It has no winning or losing conditions. It is simply an immersive fantasy world created with the intention of promoting interaction with its environment. Visiting a game world is like visiting a city or a continent, or even the inside of an emergency room, where rules, called game mechanics, govern the players’ abilities. It differs from a game in that it is designed not merely to entertain, but to both entertain and change behaviour.

A game world has a few core essential components. The first is a narrative, or story. If it were used for medical education for example, the world could be the inside of an emergency room.

Game worlds for medical and aid workers

Currently, the World Health Organization (WHO) is building such simulations. By using a 360-degree camera, we can record any emergency department in the world, then translate that into a 3D model which can be viewed on a desktop or enlarged so that the user is standing inside the virtual reproduction.

The WHO Learning Academy is building code to admit simulated patients, each one with its own life path. Virtual lives can be saved by managing the flow of patients during a mass casualty simulation. The software can predict how many minutes can be saved by careful triage.
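As a rough illustration of the kind of arithmetic such a simulator might perform, a toy scheduling model shows how reordering patients changes total waiting time. The patient list and treatment times below are invented, and real triage weighs clinical severity, not just treatment duration.

```python
def total_wait(treatment_minutes):
    """Total minutes patients spend waiting before their treatment starts."""
    wait, elapsed = 0, 0
    for t in treatment_minutes:
        wait += elapsed   # each patient waits for everyone ahead of them
        elapsed += t
    return wait

# Invented arrivals: treatment time in minutes for five simulated patients.
arrivals = [30, 5, 20, 10, 45]

fifo_wait = total_wait(arrivals)             # first come, first served
triaged_wait = total_wait(sorted(arrivals))  # shortest cases treated first
minutes_saved = fifo_wait - triaged_wait
```

In this toy run, simply reordering the queue cuts cumulative waiting from 185 to 120 minutes, the kind of difference a mass casualty simulator can make visible to trainees.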

Game worlds can be more fantastical and oriented to increase enjoyment of learning, particularly when the subject matter is complex. The UN Office for Disaster Risk Reduction and the UN World Food Program have produced video games for learning about the respective issues they respond to.

Currently with the World Food Program, our team — consisting of a video game company, learning professionals and UN subject experts in Rome — is building a fully immersive exploration game. We are working on building a game world for UN staff that will help them learn how to protect vulnerable populations and how to be accountable in their field work.

Learning through game worlds will help some World Food Program staff practise decision making. Here, a worker loads a truck in Les Cayes, Haiti, in November 2016. Photo credit: THE CANADIAN PRESS/Paul Chiasson.

The game world features multiple engagement loops (things to do) that make it attractive to participate in.

When you have to teach the minutiae of a 40-page manual in a few hours, a video game world is a sound approach. People’s capacity to recall text they read is limited in the short term, and their memory of it diminishes over the long term, but when people learn procedures through a video game world they show high engagement and retain the information.

Fantasy encompasses a simulation

Two terms have now become essential when describing what happens in game worlds: “autopoiesis,” which means self-organizing or self-generating; and “hyper reality,” a term developed by French post-modern sociologist Jean Baudrillard referring to “the generation by models” of something real “without origin or reality.”

A game world has its own “digital physics,” not real-world physics, thus distinguishing it from a simulation. A game world is a place where new things can be created and the person lives among fantasy objects. Autopoietic hyper reality is a digital space where the player is enticed to complete goals in a fantasy that encompasses a simulation.

In ‘The World of Warcraft,’ players can create a character avatar and explore an open game world. Photo credit: Shutterstock.

Scholars across the field of digital media are now hard at work creating a kind of fusion of the human nervous system with technology. What this means is that the boundary between one and the other will become imaginary, for example, in the instance of doctors using remote technologies to conduct medical procedures.

But the larger meaning is that as virtual reality continues to mature, we will gradually live more of our lives in digital space. We’ve seen many examples of this through the pandemic, including new uses of Zoom and social media to replace the workplace and face-to-face contact.

Digital game worlds are places we can live, play and work together across great distances while feeling we are in a reassuring place where we connect.

What are NFTs and why are people paying millions for them?

Written by , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

An NFT is a digital file with verified identity and ownership.

Last week, Christie’s sold a digital collage of images called “Everydays: The First 5000 Days” for US$69.3 million. This week, Elon Musk said he’s selling one of his tweets, which contains a song about NFTs, as an NFT.

The bidding on Musk’s tweet has already topped $1 million and millions more are pouring into the market — he has since tweeted, “Actually, doesn’t feel quite right selling this. Will pass.” And sites like NBA Top Shot (where you can buy, sell and trade digital NBA cards) have individual cards selling for over US$200,000.

It might sound ridiculous but the explosive market of crypto-collectibles and crypto-art is no joke. I investigate cryptocurrencies and have academic publications on Bitcoin markets. To help you understand what an NFT is and why they’re becoming so popular, here’s an explainer to make sense of it all.

What is an NFT?

A non-fungible token (NFT) is a digital file with verified identity and ownership. This verification is done using blockchain technology. Blockchain technology, simply put, is a tamper-resistant record-keeping system based on the mathematics of cryptography. So, that’s why you hear a lot of “crypto” when referring to NFTs — crypto-art, crypto-collectibles, etc.

What is fungibility?

Fungibility is the ability of an asset to be interchanged with other individual assets of the same kind; it implies equal value between the assets. If you own a fungible asset you can readily interchange it for another of a similar kind. Fungible assets simplify the exchange and trade processes, and the best example would be (you guessed it) money.

Is NFT the same as Bitcoin?

This is where I can explain and emphasize the “non-fungibility” property of NFTs. The main difference between NFTs and Bitcoins is the fact that Bitcoins are limited in supply and fungible (you can trade one Bitcoin for another and both have the same value and price). NFTs are unique but unlimited, and non-fungible (no two artworks are the same). While NFTs can appreciate in value (just like real estate), no two of them can be exchanged as equivalents.
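A toy ledger makes the distinction concrete. This is hypothetical code, not how any real blockchain stores state: fungible units are tracked as interchangeable balances, while non-fungible tokens are tracked individually, by ID.

```python
# Fungible asset: only a balance matters; any unit is interchangeable.
coin_balances = {"alice": 3, "bob": 1}

def send_coins(sender, receiver, amount):
    """Move an amount between balances; no particular coin is identified."""
    assert coin_balances[sender] >= amount
    coin_balances[sender] -= amount
    coin_balances[receiver] = coin_balances.get(receiver, 0) + amount

# Non-fungible asset: each token has a unique ID and its own owner.
nft_owners = {"artwork-001": "alice", "tweet-042": "bob"}

def send_nft(token_id, receiver):
    """Transfer one specific token; identity, not quantity, is what moves."""
    nft_owners[token_id] = receiver

send_coins("alice", "bob", 2)   # any 2 of alice's 3 coins will do
send_nft("artwork-001", "bob")  # only this exact token changes hands
```

The fungible transfer never asks *which* coins move; the NFT transfer is meaningless without naming the exact token.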

Blockchain technology, simply put, is a tamper-resistant record-keeping system based on the mathematics of cryptography. Photo credit: Shutterstock.

What does this mean for the future of money?

While not directly related to NFTs, it’s important to mention some properties of money. Among many properties, money has to be fungible (one unit is viewed as interchangeable with another) and divisible (it can be divided into smaller units of value). Bitcoin satisfies both: one Bitcoin trades like any other and can be divided into one hundred million smaller units, called satoshis. NFTs, by contrast, are neither fungible nor readily divisible.

For example, a single dollar is easily convertible into four quarters or ten dimes, but a unique NFT cannot be split into interchangeable fractions. In fact, fungibility and divisibility are part of five requirements for a currency to exist in a regulated economy.

Why are NFTs being valued?

The importance of NFTs lies in providing the ability to securely value, purchase and exchange digital art using a digital ledger. NFTs started in online gaming, gained ground when Nike patented CryptoKicks, its system for authenticating sneakers, and reached the mainstream with the famous Christie’s auction embracing NFT valuation of a digital art piece.

NFTs are commonly created by uploading files, such as digital artwork, to an auction market. Just like any other form of art, NFTs are not mutually interchangeable, making them more like “collectible” items.

The platform (typically Ethereum) allows the digital art to be “tokenized” and for the ownership to be safely stored using a decentralized, open-source blockchain (that is, anyone can check the ledger), featuring smart contract functionality. This means the traditional role of a “middle man” for selling the art is now digitized.

Is owning the NFTs the same as owning the copyright?

No, owning the NFT doesn’t grant you the copyright to the art; they are distinct from one another. The ownership of the NFT is established using a digital ledger, which anyone can access because it is stored openly. This ledger tracks who owns an NFT and ensures that the NFT can’t be duplicated or tampered with, essentially a “smart contract.”
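One way to see why such a ledger resists tampering is a minimal hash-chained log, where each entry commits to the hash of the entry before it. This is a simplified sketch for illustration, not Ethereum's actual data structures or a real smart contract.

```python
import hashlib

def entry_hash(prev_hash, token_id, owner):
    """Hash an ownership entry together with the previous entry's hash."""
    return hashlib.sha256(f"{prev_hash}:{token_id}:{owner}".encode()).hexdigest()

ledger = []  # append-only list of (prev_hash, token_id, owner, hash)

def record_transfer(token_id, new_owner):
    """Append a transfer, chaining it to the latest entry's hash."""
    prev = ledger[-1][3] if ledger else "genesis"
    ledger.append((prev, token_id, new_owner, entry_hash(prev, token_id, new_owner)))

def chain_valid():
    """Recompute every hash; tampering with any past entry breaks the chain."""
    prev = "genesis"
    for stored_prev, token_id, owner, h in ledger:
        if stored_prev != prev or entry_hash(prev, token_id, owner) != h:
            return False
        prev = h
    return True

record_transfer("artwork-001", "alice")  # mint: first recorded owner
record_transfer("artwork-001", "bob")    # alice sells to bob
```

Because every entry's hash depends on all the entries before it, rewriting any past owner invalidates every later hash, which is what makes the public record trustworthy.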

What does the future hold for NFTs?

It is undeniable that digital assets and blockchain technology are changing the future of trade, and NFTs are at the forefront of this growth. However, just like other examples in history (e.g. the Dutch tulip mania, the dotcom bubble), certain valuations may see the need for future corrections depending on socio-economic desires and the chance of a bubble.

Every generation has its own niche attachment to certain valuations whether for vanity or other reasons. NFTs are currently very popular among younger generations, but whether this generation will have the economic power to purchase or find use for them in the future, is both a social and economic question.

For NFTs, the true potential is yet to be uncovered. Whether big industry players in art, design or fashion will buy in is also yet to be seen. One thing is for sure: NFTs have opened the door for many digital artists to be identified and valued, and the smart contract functionality of blockchain technology will be used in future valuations of many assets.

 

Ontario’s plans for COVID-19 contact tracing wearable devices threaten freedom and privacy

Written by , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

Wearable devices can help track the spread of COVID-19 in places where smartphone use isn’t possible.

In February, the Ontario government announced it had invested $2.5 million in wearable contact tracing technology to help curb the spread of coronavirus. The funds will be directed to Facedrive Inc., a Toronto-based company, to accelerate the production of its contact tracing wristbands worn by essential workers.

These wristbands are being tested and considered for wide use in long-term care homes, a First Nation community, airlines, schools and construction sites. They work by communicating with other devices through a combination of Bluetooth and Wi-Fi, sending an alert to any employee who has been in close contact with somebody who has tested positive for the virus. The wristbands also enforce social distancing by vibrating or beeping whenever they are within two metres of each other.
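As an illustration of how such proximity alerts can work in principle, received Bluetooth signal strength (RSSI) can be converted to a rough distance estimate using the log-distance path-loss model. This is a generic sketch, not Facedrive's algorithm, and the transmit power and path-loss values are assumptions that vary by device and environment.

```python
def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from received signal strength.

    tx_power_dbm is the expected RSSI at 1 metre; the path-loss exponent
    models the environment. Both are device-specific assumptions here.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def too_close(rssi_dbm, threshold_m=2.0):
    """True if the estimated distance is under the distancing threshold."""
    return estimated_distance_m(rssi_dbm) < threshold_m

# A strong signal like -60 dBm implies roughly 1 m: trigger the buzzer.
# A weak signal like -80 dBm implies roughly 10 m or more: no alert.
```

In practice RSSI is noisy (bodies, walls and pockets all attenuate the signal), which is one reason the real-world accuracy of such devices remains an open question.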

Contact tracing technologies

During the early stages of the pandemic, governments around the world repurposed new and pre-existing technologies in efforts to track, monitor and contain the spread of the virus. Many privacy experts and surveillance scholars feared the expansion of government and corporate surveillance, pointing out the long-term implications for privacy rights, civil liberties and democracy. Contact tracing apps were one of these innovations.

Eager for life to return to normal, a little over half (56 per cent) of Canadians said they were willing to use a contact tracing app. Fast-forward to today, and it remains unclear whether COVID Alert, Canada’s contact tracing app, has been effective in curbing the spread of the coronavirus. The effectiveness of contact tracing wearables likewise remains up in the air.

https://www.youtube.com/watch?v=7PcUzip7lt4

A Canadian government video showing how the COVID Alert app works.

Discriminatory applications

The allure of technologies often overshadows the immediate and long-term social, political and ethical implications that Canadians and policymakers need to be aware of.

As someone who has researched surveillance and its histories, the current government proposal is unsettling due to the places and people it targets.

Data shows that workplaces deemed essential services are characterized by highly racialized labour forces; the introduction of surveillance technologies like a wearable bracelet will disproportionately target vulnerable groups who have historically been subjected to disparate forms of surveillance and discrimination. The policy may not only fail to slow the spread of the virus but may actually perpetuate historical legacies of discrimination against vulnerable populations including racialized and low-income groups.

There is also potential for discrimination and bias as a result of the visibility of the devices. More than just an item around the wrist, the device broadcasts a personal decision: to opt-in or opt-out of contact tracing. At the same time, it may act as an identifying symbol for a particular labour force.

Ensuring privacy

Before the COVID Alert app was launched in Ontario in July 2020, federal and provincial governments worked together in consultation with privacy bodies, while independent researchers and experts also offered recommendations. I, and others on our team, made recommendations to enhance COVID Alert’s privacy and security standards, many of which were implemented. Many lauded the app for its privacy-preserving and data security protections. Yet Ontario’s recent decision to invest in wearable contact tracing devices did not receive the same extensive level of public consultation, raising concerns over transparency, privacy and data security.

In a privacy white paper co-authored with the law firm McCarthy Tétrault LLP — available only upon request, making it a challenge for researchers to review — Facedrive claims to have followed Canadian privacy guidelines. But Canadian privacy laws have been widely criticized as outdated and are not an adequate benchmark for judging a technology’s threat to privacy.

Current privacy laws have been criticized for not clearly defining concepts like “personal information,” allowing tech companies to claim compliance while exploiting ambiguities in the law. Facedrive claims that their wearables “do not contain any personal information” since employee names are mapped with wearable serial numbers and stored on a centralized Microsoft Azure server. Yet the name of the employee whose test is positive and the names of those they interacted with are still available and revealed to employers on an online dashboard.

Further complicating matters, once an employer is logged in to the dashboard, not only are they able to see who has been in contact with whom, they’re also able to assess individual employee risk levels for virus exposure and manually send notifications if they suspect transmission. This means that employers are provided with what is essentially health data, while also taking up the public health role of contact tracer. This raises further questions about employee privacy rights, data security and the ethics of workplace surveillance.

Threats to democracy

Without critical examination and debate, such surveillance practices will have serious implications for civil liberties, especially the rights to privacy, freedom from discrimination and autonomy. The deployment of contact tracing wearables in workplaces will normalize surveillance and lead to its expansion — Facedrive has already indicated its interest in continuing the use of its technologies beyond the pandemic.

Wearable contact tracing devices and the data they collect can threaten our rights, freedoms and even democracy itself. It is vital for the general public, employers and policy-makers to think carefully about the limitations and implications of wearable tracking devices, including the ways in which they use, collect and store data, as well as how such surveillance operations will be dismantled after the pandemic rather than swept along by technological fanfare.

Ontario’s digital health program has a data quality problem, despite billions in spending

Written by , Ryerson University; , University of Toronto. Photo credit: Shutterstock. Originally published in The Conversation.

Digital health technology, such as electronic health records, is believed to enhance patient-centred care, improve integrated care and ensure financially sustainable health care.

Digital health is about applying advanced information technologies to enable free flow of patient information across the circle of care. For patients, that means every health-care provider they see at different locations should be able to access relevant health record information quickly and efficiently.

Digital health technology, such as electronic health records, is believed to enhance patient-centred care, improve integrated care and ensure financial sustainability of our health-care system. However, Ontarians are facing the tough reality that their health data are still fragmented, despite billions of dollars spent over the last two decades to enable fast and secure exchange of health information. The COVID-19 pandemic has brought to light even more data quality issues.

As noted in a recent National Post article, much of the public data on COVID-19 is a mess. Not only are data on infected cases and deaths delayed, they are also incomplete. Ontario reportedly offered inconsistent counts between provincial medical officials and local public health units. No wonder the Ministry of Health admits that “consistent standards are lacking across sectors — making it extremely difficult to integrate patient records or to integrate local systems with provincial ones.”

It is a tough pill to swallow after years of investment aimed at enabling fast and secure health data exchange.

Neither sustainable nor effective

The Ontario government is taking two approaches to improving data quality, examples of which include accuracy and timeliness of data reported across different service providers. The first approach centres on improving health data exchange across heterogeneous systems (systems developed by different vendors and requiring different hardware and software configurations to operate) by using common communication standards.

However, this approach is neither scalable nor sustainable, as communications across these systems become increasingly complex, time-consuming and error-prone when more systems are added to the mix. The inconsistent counts of COVID-19 cases and deaths provided by different levels of government are a case in point. Not to mention that these standards evolve rapidly, and even previous versions of the same standard cannot be easily mapped and migrated to current ones.

The second approach relies on the minimum common data set proposed in the Digital Health Playbook, a resource intended to guide health-care organizations in building their digital systems. The minimum data set contains data classes (such as individual patients) and their corresponding elements (such as date of birth) for clinical notes, laboratory information, medications, vital signs, patient demographics and procedures, to name a few.
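Checking a record against a minimum data set is essentially a completeness test: for each data class, every required element must be present. The sketch below is hypothetical; the field names are illustrative inventions, not the Digital Health Playbook's actual schema.

```python
# Illustrative minimum data set: data classes mapped to required elements.
# These names are examples only, not the Playbook's real schema.
MINIMUM_DATA_SET = {
    "patient_demographics": ["name", "date_of_birth", "health_card_number"],
    "medications": ["drug_name", "dose", "start_date"],
    "vital_signs": ["heart_rate", "blood_pressure"],
}

def missing_elements(record):
    """Return the required elements absent from a patient record."""
    missing = []
    for data_class, elements in MINIMUM_DATA_SET.items():
        present = record.get(data_class, {})
        for element in elements:
            if element not in present:
                missing.append(f"{data_class}.{element}")
    return missing

record = {
    "patient_demographics": {"name": "Jane Doe", "date_of_birth": "1980-01-01"},
    "vital_signs": {"heart_rate": 72, "blood_pressure": "120/80"},
}
gaps = missing_elements(record)  # flags the health card number and all medication fields
```

A check like this only establishes that a record meets the floor; as the next paragraphs note, complex patients need far more than the minimum.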

Health-care providers need fast, secure access to medical records, including clinical notes, lab information, medications, vital signs, patient demographics and procedures. Photo credit: Pixabay.

These data sets, while appropriate for the requirements of family physicians whose main responsibility is disease control and prevention, are not sufficient for treating complex patients who suffer from multiple health issues, which demand a vast amount of health data from various health-care providers.

These two approaches adopted by the Ontario government to address data quality issues are neither sustainable nor effective, so can hardly serve as a strategy guiding health digitalization.

As researchers focusing on IT in health governance, we propose that a data strategy encompass four pillars:

1. Data quality standards

First, data quality is an umbrella term encompassing multiple dimensions, including accuracy, accessibility and timeliness. There are trade-offs among these dimensions. For example, speeding up data reporting may come at the expense of comprehensiveness, because covering all the required data takes time.

While “fit for use” (meaning the quality of data fits the requirements of their intended users) is considered an appropriate and pragmatic benchmark, it needs to be clearly spelled out which quality standards should be enforced. Given limited resources and increasing pressure to curb health-care costs, deciding which data quality standards to prioritize is increasingly urgent.

2. Sustainable, scalable, patient-centric platform

Second, the health-care sector is not alone in dealing with decades-old systems and the low-quality data — such as inaccurate COVID-19 case counts — generated by these systems. Drawing on experiences from banks and other organizations, the health-care sector could create an open data platform that enables data sharing across health-care providers and allows patients to share data from their social media and mobile and wearable devices. Countries such as the United Kingdom and Germany have started implementing the open data platform idea.

3. Measurable indicators of improvement

Third, measurable outcomes pertaining to data quality improvement efforts need to be defined. Improvement efforts could include training programs on best practices related to data entry, and introducing system features that enable data quality checking (for example, completeness or consistency). Measurable outcomes would ensure accountability and the achievement of the intended objectives, and inform future funding decisions.

4. Improvement process adopted by providers

Lastly, a data strategy needs to clearly define a data quality improvement and monitoring process, in which the quality of the data is continuously monitored and assessed to ensure that it supports patient care and research. Data quality is a shared responsibility, so the quality assurance process needs to take place not only across providers but also within each one.

To define and implement the data strategy, meaningful engagement with all stakeholders is key. For example, patients and providers need to be involved to identify the data required to treat the diseases that claim most of our health-care budget, define the quality dimensions of those data, and specify roles and responsibilities for maintaining data quality.

In contrast to the Band-Aid approach adopted by the Ontario government, the four-pillar data strategy is long-term, focused and holistic. It would ensure that data quality is placed at the front and centre of Ontario’s effort in health digitization. Following the strategy, our health-care system would develop a sustainable mechanism and a scalable capability to continuously improve data quality.

Without such a data strategy, Ontarians will stand to lose another decade and billions more.

Memes like Bernie Sanders’ mittens spread through networks the same way viruses spread through populations

Written by , Ryerson University. Photo credit: Caroline Brehman/Pool via AP. Originally published in The Conversation.

Sen. Bernie Sanders (far right) attended the 59th Presidential Inauguration at the U.S. Capitol in Washington on Jan. 20, 2021.

None of us escaped the Bernie Sanders mitten memes following President Joe Biden’s inauguration. The photographer Brendan Smialowski captured the image of Sen. Sanders seated at the inauguration that went viral, resulting in an explosion of thousands of memes that spread rapidly across the world.

Memes aside, we are in the middle of a deadly global pandemic, unlike anything we’ve faced in modern times. At the time of writing, there are more than 100 million COVID-19 cases and two million deaths worldwide. When a person becomes infected with COVID-19, they may infect others physically close to them at their home, workplace or in a crowded public space. Despite mitigating efforts such as physical distancing and face masks, new hotspots of infection may readily appear.

Over the past year, we’ve heard a lot in the news cycle about epidemiology, the science of how infections spread in populations. Terms like the “R number” and “exponential spread” are now part of our everyday lexicon. Close physical interactions between people drive the spread of viruses like COVID-19 through social networks.


Physical and digital networks

Networks permeate our lives at every level, from the interactions of proteins in our cells to our followers on social media and Bitcoin transactions. Over the past 20 years, a sizeable interdisciplinary field has emerged to study what makes networks tick. Network science focuses on the modelling and mining of networks, informed by mathematics, physics and the computational sciences.

Networks are collections of dots called nodes, with lines called edges representing interactions between them. Imagine a network with nodes representing people in a city and edges joining any two people within two metres of each other. Such a contact network maps how contagions like COVID-19 spread.
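In code, this structure is just a map from each node to its set of neighbours. The sketch below uses a tiny contact network with invented names and contacts, purely for illustration:

```python
# A network as an adjacency map: each node (a person) maps to the set
# of nodes it shares an edge with (people within two metres of them).
# Names and contacts are invented for illustration.
contact_network = {
    "Ana": {"Ben", "Cal"},
    "Ben": {"Ana"},
    "Cal": {"Ana", "Dee"},
    "Dee": {"Cal", "Eli"},
    "Eli": {"Dee"},
}

# Edges are symmetric: if Ana is near Ben, then Ben is near Ana.
for person, contacts in contact_network.items():
    for other in contacts:
        assert person in contact_network[other]
```

The same representation works unchanged for the Twitter example below, with accounts as nodes and follower relationships as edges; only the scale differs.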

For a different example, consider accounts on Twitter as nodes, with edges linking to those accounts’ followers. We may then visualize Twitter as a network with 340 million nodes, swarming with tens of billions of edges.

In a network, nodes are linked by edges. Darker nodes in the figure have more edges. Photo credit: Author provided.

Burning networks

If a person becomes infected with COVID-19, they can infect those close to them. From there, the virus may spread to others in their contact network. A challenge with modelling a viral outbreak is that infections do not spread from one person alone but from many sources. Without mitigation, contagion is analogous to a fire burning up a dry forest, wreaking havoc across large areas.

How can we measure the speed of contagion in a network? Viruses and memes inspired the idea of network burning, which measures the speed at which contagion spreads between nodes.

Burning spreads over discrete time-steps, and one new source of burning appears at each step of the process. The latter part is an essential feature: multiple sources pop up anywhere in the network over time. The process ends when every node is burning; for example, the process ends if every person in a population catches COVID-19.

From Bernie Sanders’ mittens to Baby Yoda or Mike Pence’s fly, memes appear and spread quickly through social networks such as Facebook, Instagram and Twitter.

When it comes to viral memes, if a user posts a meme on Instagram, it shows up in their followers’ home feed. From there, it appears in the feeds of followers of those followers and outwards from there.

Our intuition is that a few hops are enough to reach anyone on social media, and the data bear that out. A 2016 study suggests it takes only four hops on average to connect any two accounts on Facebook. The small world of social networks predicts that popular memes will reach most accounts in short order.
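A “hop” here is one step along an edge, and the number of hops between two accounts is found with a breadth-first search. The sketch below runs on an invented four-account network (the accounts and links are hypothetical):

```python
from collections import deque

def hops(graph, start, target):
    """Minimum number of hops (edges) between two nodes, via
    breadth-first search; returns None if they are not connected."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return seen[node]
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen[neighbour] = seen[node] + 1
                queue.append(neighbour)
    return None

# A toy follower network (invented accounts).
follows = {
    "a": {"b"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"c"},
}
```

Here `hops(follows, "a", "d")` is 3; the 2016 Facebook study found that on the real network such distances average only about four hops.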

The minimum number of steps needed to burn every node is called the burning number of the network. We can think of the burning number as a quantitative measure of how fast contagion spreads. The smaller the burning number is, the faster contagion spreads in the network.
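With the step-counting just described (each step the fire first spreads, then one new source is ignited), the burning number of a small network can be computed by brute force. This is only a sketch: it is exponential in network size, so it is suitable only for toy examples.

```python
from itertools import permutations

def burns(graph, sources):
    """Run the burning process for a given sequence of fire sources:
    each step the fire first spreads from burned nodes to their
    neighbours, then the next source is ignited. Returns True if
    every node ends up burned."""
    burned = set()
    for source in sources:
        burned |= {n for b in burned for n in graph[b]}
        burned.add(source)
    return burned == set(graph)

def burning_number(graph):
    """Smallest number of steps (fire sources) that burns every node."""
    nodes = list(graph)
    for k in range(1, len(nodes) + 1):
        if any(burns(graph, seq) for seq in permutations(nodes, k)):
            return k

# Nine diners at a round table (a clique) vs. a nine-person lineup.
clique = {i: set(range(1, 10)) - {i} for i in range(1, 10)}
lineup = {i: {j for j in (i - 1, i + 1) if 1 <= j <= 9} for i in range(1, 10)}
```

Brute force gives a burning number of 2 for the clique (ignite one diner, then the fire reaches the whole table in a single spreading step) and 3 for the lineup, matching the examples discussed next.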

Dining in is better for you

Imagine nine diners at a restaurant sitting in close quarters at a round table. In that case, we have a clique network, in which every node links to every other one. If one person carries COVID-19, then the chances are high that all the guests will be infected, because they are all within two metres of the infected individual. The burning number of a clique is two, the lowest it can be for a network with more than one node: one step to ignite a source and one more for the fire to spread to everyone else.

A clique and line-up with nine nodes are shown. In the clique at the top representing diners at a table, the burning spreads in one step from the node 1. Burning a lineup with nine nodes takes three steps. In the first step, node 1 is burned. In the second step, burning spreads to the left and right of node 1 and we also burn node 2. In the last step, we burn node 3, and the burning spreads to every node. Photo credit: Author provided.

In contrast, think of a lineup at a grocery store with one person infected. The infection potentially spreads only to those directly in front or behind them because the distance from one end of the line to the other is too large for the virus to spread.

For example, in a lineup with nine people, if someone in the middle is infected, it would take four steps to infect everyone. If the infected person is at the end of the line, it takes eight steps. In either case, the spread is slower than for our unlucky, hypothetical diners.

Network burning predicts that lineups are among the slowest kinds of networks for the spread of contagion. If there are n people in a lineup, the burning number is the square root of n, rounded up. So if nine people are in line, the burning number is three: the minimum number of well-placed fire sources needed to spread the contagion to everyone in the line as quickly as possible.

A math conjecture predicts that in any possible network with n nodes, the burning number is at most the square root of n. While no one has proven that conjecture yet, the best-known result is that the burning number of a network is at most the square root of 1.5 times n.

The difference between the square root of n and the square root of 1.5 times n may not seem large, but the gap between them grows considerably for large n. If n is the world’s population of 7.8 billion, then the square root of n is about 88,318, and the square root of 1.5 times n is 108,167.
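These two figures are easy to verify (a quick arithmetic check of the numbers quoted above):

```python
import math

n = 7_800_000_000  # approximate world population

# The conjectured bound vs. the best-known proven bound.
assert round(math.sqrt(n)) == 88_318        # square root of n
assert round(math.sqrt(1.5 * n)) == 108_167  # square root of 1.5n
```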

What the math tells us

Burning networks gives us a simplified but concise view of how contagion propagates in a network, and a measure of how rapidly contagion spreads to each node. While network burning doesn’t directly tell us how to slow the spread of a virus or halt a meme, it highlights that our interactions significantly affect our exposure to contagion.

How networks of interactions are wired has a profound impact on viral outbreaks, a fact especially relevant during these times. Remember that the next time you are in a physically distanced lineup. You are doing your part to slow the spread of COVID-19. And good luck avoiding the next breaking meme.

The math tells us so.

Facebook antitrust battle escalates tensions between government, Big Tech

Written by , Ryerson University; , Ryerson University; , Ryerson University. Photo credit: AP Photo/Jeff Chiu. Originally published in The Conversation.

In November 2020 photo, a demonstrator joins others outside of the home of Facebook CEO Mark Zuckerberg to protest what they say is Facebook spreading disinformation in San Francisco.

Facebook made news this week by blocking U.S. President Donald Trump from posting to its platform. But a separate power struggle between government and Big Tech, one that will be far more consequential in the long term, is unfolding in the background. The United States government seems prepared to rein in the social media giant and potentially break up the company.

In December 2020, the Federal Trade Commission (FTC) and attorneys general from 46 states launched antitrust proceedings. After months of theatrical congressional testimony by various tech CEOs, this development represents an escalation that could ultimately determine the balance of power between the U.S. government and Silicon Valley.

Antitrust is the American legal framework for competition law.

In the late 1800s, Standard Oil dominated the American oil refinery market and its chairman, John D. Rockefeller, was thought to have become the richest man who ever lived. In 1890, Congress passed the Sherman Antitrust Act, which was used to break up Standard Oil into smaller companies.

In later years, the market power of AT&T and Microsoft was similarly reined in by the U.S. government.

Antitrust outlaws price fixing, predatory pricing and acquisitions designed to eliminate competition and other practices that hurt consumers, forcing companies to change their practices and structures.

How antitrust applies to Facebook

For anyone familiar with Facebook, it might not be self-evident how antitrust could apply. How can a law designed to prevent price-fixing apply to a free service?

Antitrust is concerned with what monopolistic behaviour by companies costs consumers. Facebook end-users do not pay a subscription price to access the platform, but there are at least three important ways in which Facebook’s business model leverages the company’s immense market power to create costs to consumers.

First, while it’s true that end users join Facebook for free, advertisers do pay for the service. In fact, the company’s main source of income comes from advertisers, and regulators are concerned these consumers may be paying inflated prices for sub-standard products and fraudulent ad performance.

Second, while there is no monetary cost to join Facebook, users pay a steep price in data for access to the platform. Those user data contributions create immense value for Facebook’s advertising revenue streams.

Third, Facebook’s growth strategy has been to use its market power aggressively to buy or bury competitors, meaning that both end-users and advertisers lose significant opportunities for innovation and choice in the marketplace for social media services.

Facebook under fire in Europe

This isn’t the first time the market powers of big tech firms have come under scrutiny. Even before citing its “own policies” to block the U.S. president’s account, Facebook has repeatedly clashed with governments. There is unease about the company’s power and influence relative to elected, legislated public authorities. In Europe, the company has been fined for misleading regulators about key details of its acquisition of WhatsApp.

Facebook was also called to testify in various government proceedings globally for its role in the genocide of the Rohingya in Myanmar, undermining elections in the U.S. and abroad and for its lax approach to user privacy and privacy law.

Overall, despite being founded and based in the United States, Facebook has been involved in more than 80 governmental hearings around the world since 2016. The efforts have largely been led by Europe, while the U.S. has applied a light touch.

Facebook CEO Mark Zuckerberg testifies remotely during a House Judiciary subcommittee on antitrust on Capitol Hill in July 2020 in Washington, D.C. Photo credit: Mandel Ngan/Pool via AP.

When controversy has put Facebook on the defensive on its home turf, the U.S. government has tended to scold rather than punish. American hearings and reports have largely shamed the company for its inattentiveness, while saluting it as an “American success story.”

Until very recently, the U.S. government merely appealed to Facebook’s “greater responsibility” and called for “deep introspection,” hoping that approach would lead to change and quell global calls for stricter government oversight.

Dramatic pivot

American antitrust proceedings against Facebook, therefore, represent a dramatic pivot, one that aligns the U.S. government with the global movement seeking greater public oversight of Big Tech.

There are many possible outcomes of antitrust action against Facebook. There could be a slap-on-the-wrist fine and an apology. Facebook could be required to break itself up by divesting some of its major assets; both Instagram and WhatsApp seem like low-hanging fruit.

A radical outcome — if unlikely — could see the mandating of more transparent technical standards, allowing competitors the ability to integrate their platforms with Facebook or users the ability to access their own Facebook networks through a different platform.

For Canada and other countries that have struggled to bring Facebook to the table for years, a successful antitrust case in the U.S. might provide an opportunity to push the tech giant to adhere to privacy laws, for instance.

Whatever happens will be important, but will unfold slowly. In the months and years to come, antitrust hearings will provide a definitive accounting of the power balance between Big Tech and the U.S. government.

If regulators successfully break up Facebook, the U.S. may turn its attention to breaking up other dominant technology firms, like Amazon. A win may also lead to stronger government oversight of the resulting smaller firms on other issues such as disinformation, election advertising and content moderation practices such as the decision to block President Trump’s accounts.

Antitrust Facebook hearings will tell us a great deal about the future of some of our most prominent tech firms, the role of governments in social media and how and where we will communicate in the future.

Tech’s next great opportunity is mid-career workers

Written by Arvind Gupta, University of Toronto, and AJ Tibando, Ryerson University. Photo credit: Unsplash. Originally published in The Conversation.

Mid-career workers have solid business skills valuable to the tech industry.

In the movie The Intern, a 70-year-old Robert De Niro decides to make a career change and lands an internship at an online fashion startup overflowing with young millennials and free food. The running joke in this film is that De Niro is too “old” to create space for himself in a startup, a world for the “young.”

While De Niro’s character is fictional, the lessons in this film about talent and ageism in the tech sector are quite real.

In displaying the characteristic that many of Canada’s tech giants prize most, a desire to constantly learn and grow, the “aged intern” highlights tech’s next great talent pool: the middle-aged or “mid-career” worker.

We’ve spent several decades studying and operating in the skills training and workforce development space. While job transitions have always been an area of challenge for mid-career workers, our research with the Brookfield Institute for Innovation + Entrepreneurship has highlighted the looming impacts of automation in exacerbating that challenge, as well as the inherent opportunity for these workers to be absorbed into the digital economy, an area of high growth desperate for talent.

Shattering the myth

For many years, the idea has persisted among tech companies that in order to be innovative, they must be built by and for young people. Mark Zuckerberg infamously declared that tech companies should think twice before hiring anyone over 30. Now in his mid-30s, he has presumably moved that bar.

However, many tech companies are still made up predominantly of younger workers. Young founders often hire young peers, recent graduates are often paid less, and there is deeply entrenched ageism in the tech world, along with assumptions that “older” workers (those over 30) won’t fit into a company’s culture or contribute the same value.

To put it bluntly, this view is short-sighted.

As Canada’s digital economy grows and scrappy startups become larger multinational corporations, they will require many of the same solid business skills that any other company does. Positions in sales, marketing, project and people management all require transferable skills that are often in the greatest demand for larger firms, tech or otherwise. Beyond that, understanding solid business processes that foster scaling are critical and come from years of experience.

Here lies a new pool of talent for fast-growing Canadian tech companies: workers who are highly experienced and skilled, and who understand the systems that make a business succeed.

Who are mid-career workers?

Mid-career workers are individuals who have been in the workforce for 10 or more years and who are sitting at the halfway mark in building their careers. This describes the vast majority of the workforce in Canada. They generally have strong business acumen in fostering firm growth and bring a level of maturity and professionalism that comes through hard-earned experience.

As tech companies rapidly grow, they need to hire people who have real-world experience, have worked on and led teams, can build relationships and know how to move products and processes forward. Many such companies regularly say they struggle to find tech workers with these skills.

The true obstacle here, however, may be that tech companies are largely unwilling to accept the suggestion that their best possible hires may neither be young nor from within the tech sector at all.

Many workers will likely soon be looking for their next career move due to rapid advances in automation. Unlike a recession or the shocks to the economy that we are familiar with, automation has the potential to have drastic and permanent impacts on entire sectors.

For mid-career workers in vulnerable sectors, losing a job at one company may well eliminate the option of finding work at another similar firm because automation would have affected jobs there as well.

The likely result will be a growing demographic of top talent looking to break into new industries, including tech. Seizing this opportunity, however, will require Canadian tech firms to adopt some new thinking and a new approach when it comes to retraining and reskilling.

Converting potential into talent

The challenge is to convert the foundation of knowledge and experience of highly skilled mid-career workers into new streams of talent for fast-growing sectors, such as tech, without overlooking the specificities of what it takes to succeed in these sectors.

For example, a senior retail sales manager understands the sales process: how to listen to potential clients, build a sales channel, nurture prospects and close a deal. In the tech space, the product or service will be different and the tools almost certainly state-of-the-art. Although the core skills gained from years of experience will be key to making the transition into a tech firm, doing so will likely require more training.

Now consider the life of a mid-career worker who, with a mortgage and growing family obligations, needs to make this shift as quickly and seamlessly as possible. Less interested in “credentials,” these people will need the digital literacy and technical skills that allow their new employers to take them seriously.

Training that is mid-career focused and cross-sectoral does not currently exist at scale. We envision a training approach that is entirely industry-led, designed to operate on the fastest timeline possible and leverages job placements and work-integrated learning opportunities so that these workers are not just skilled, but provided with on-ramps to new careers.

What is needed to accomplish this is a mechanism that rapidly confers new skills to mid-career workers, shifting their talents and potential from high-risk sectors to high-demand sectors.

Our new Canadian initiative, Palette Inc., is attempting to do exactly this. Palette is pioneering a new approach to mid-career retraining by connecting industry, workers and educators to develop new pathways for workers to move from declining industries to growing ones. As automation’s impacts become more pronounced, this mechanism will match employers with workers who possess the right skills.

For companies willing to look past the obvious yet minor gaps in skills to see potential and talent, great rewards await.

The math behind Trump’s tweets

Written by Anthony Bonato and Lyndsay Roach, Ryerson University. Photo credit AP Photo/Evan Vucci. Originally published in The Conversation.

President Donald Trump delivers a lot of information through Twitter. Here he speaks in the Oval Office of the White House, March 2018.

United States President Donald Trump has a preoccupation with Twitter. Since his account @realDonaldTrump became active in March 2009, it has amassed 53.2 million followers, making it the 18th most popular account on the social media site.

While Trump has tweeted more than 38,000 times, his tweets during and after the 2016 presidential election made his Twitter account a lightning rod for the media and the public. Major news outlets like CNN, CBC, and BBC routinely embed tweets from @realDonaldTrump in their online stories. The Daily Show even turned Trump’s tweets into a mock presidential museum.

In a controversial and unparalleled fashion, Trump uses Twitter as a vehicle for his political announcements. On high-impact issues such as the U.S. travel ban, transgender military recruits and immigration, to name a few, Trump used Twitter to communicate policy decisions.

Alec Baldwin on ‘Saturday Night Live’ in a 2016 sketch on how Trump, then the president-elect, couldn’t stop tweeting. Photo credit NBC.

Given the volume of Trump’s tweets and their potential political relevance, we thought it would be revealing and novel to use mathematical methods to analyze the web of interactions formed by his most frequently used keywords.

Network analysis

One of our primary goals was to uncover communities: groupings of thematically related keywords. We formed co-occurrence networks based on Trump’s tweets, where nodes are keywords and two keywords are linked if they appear in the same tweet. For example, if the keywords “bad” and “media” appear in the same tweet, they receive a link.
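Building such a co-occurrence network takes only a few lines. The sketch below uses three invented keyword lists as stand-in “tweets”; they are not actual data from the archive:

```python
from itertools import combinations

# Stand-in "tweets", already reduced to keyword lists (invented).
tweets = [
    ["fake", "news", "media"],
    ["bad", "media"],
    ["fake", "media"],
]

# Nodes are keywords; an undirected link joins two keywords
# whenever they appear together in at least one tweet.
edges = set()
for keywords in tweets:
    for a, b in combinations(sorted(set(keywords)), 2):
        edges.add((a, b))

# "fake" and "media" co-occur, so they are linked.
assert ("fake", "media") in edges
```

A fuller version would also weight each link by how often the pair co-occurs, which is what makes the links resizable by frequency in the visualizations below.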

Using an online archive of the president’s tweets on GitHub, we extracted the top 100 keywords from Trump’s Twitter account from each of the last four years. We removed retweets and common words like “it” and “the.”

Some nodes were combined if the keyword was made up of two words; for example, “white” and “house” became “white house;” others such as “e-mail” and “e-mails” were kept separate because Trump used them in different contexts. Labels containing more than one word without spaces are hashtags that frequently appear in the tweets.

We visualized the keyword networks from @realDonaldTrump using the open-source software Gephi with the ForceAtlas2 layout algorithm. Communities are groups of nodes that are more likely to link to each other than to the rest of the network. Gephi identifies them with the Louvain method, an algorithm that optimizes the network’s modularity, a measure of the strength of a division into communities: the higher the modularity, the better the division.
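Modularity itself has a direct definition (Newman’s Q), which the Louvain method optimizes. As a sketch, here it is evaluated on a toy network of two triangles joined by a single bridge, split into its two obvious communities; the network and partition are illustrative, not taken from the tweet data:

```python
def modularity(graph, communities):
    """Newman's modularity Q for an undirected graph (adjacency map)
    and a partition of its nodes into communities."""
    m = sum(len(nbrs) for nbrs in graph.values()) / 2  # number of edges
    label = {v: i for i, group in enumerate(communities) for v in group}
    q = 0.0
    for u in graph:
        for v in graph:
            if label[u] == label[v]:  # only same-community pairs count
                a_uv = 1 if v in graph[u] else 0
                q += a_uv - len(graph[u]) * len(graph[v]) / (2 * m)
    return q / (2 * m)

# Two triangles {1,2,3} and {4,5,6} joined by the bridge 3-4.
g = {
    1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
    4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5},
}
```

Splitting at the bridge, `modularity(g, [{1, 2, 3}, {4, 5, 6}])` comes out to 5/14, about 0.36, while lumping all six nodes into one community scores 0, which is why the two-triangle split is preferred.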

The communities were uncovered as a byproduct of the overall network structure, and not by any manual manipulation on our parts. The Gephi software randomly assigned colours to each community: keywords with the same colour are thematically related.

Visualizations

The following network visualizations represent keywords from Trump’s Twitter account taken in 2015 and 2016, leading up to the 2016 U.S. presidential election. Links and nodes were resized based on their relative frequency.

The keyword network from Trump’s 2015 tweets.

In the 2015 network, the two nodes with the most links are “trump” and “realdonaldtrump,” which both appear in the purple community. The likely reason why Trump’s name came up so often as a keyword in 2015 was that he was campaigning for the Republican primary, and his tweets often included compliments made about or by him.

The purple community containing “cruz,” “rubio,” and “carson,” and the green community containing “kasich” and “bush” correspond to his Republican primary opponents.

In the 2016 network, the communities reflect his race against the Democratic nominee Hillary Clinton. The purple community appears to focus on Clinton and the Democratic Party, containing “crooked,” “fbi,” “emails,” and his hashtag “draintheswamp.”

The keyword network from Trump’s 2016 tweets.

In the orange community, there are keywords “rally,” “new hampshire” and “michigan,” along with his hashtag “makeamericagreatagain.” In the blue community, we observe the swing states “ohio” and “florida,” and his shortened hashtag “maga” that stands for “Make America Great Again.”

Next we looked at the 2017 and 2018 networks which correspond to the first and second years of Trump’s presidency.

In the 2017 network, the blue community corresponds to Trump’s dislike of the media, and it contains “fake,” “news,” “cnn,” “bad” and “media.” The orange community contains “hillary clinton,” “fbi” and “crooked.”

The keyword network from Trump’s 2017 tweets.

The green community corresponds to domestic policy issues such as “healthcare,” “economy,” “jobs,” “tax,” “reform,” and “cut,” while the purple community has a cluster related to foreign policy issues such as “security,” “china,” and “north korea.”

The keyword network from Trump’s 2018 tweets.

In the 2018 network, communities emerged related to trade (in orange) and borders and immigration (in purple). Trump’s focus on the media and Clinton continues unabated and moves into the blue community. He frequently tweeted about “tax,” “cuts,” and “jobs” in the green community.

Five communities revealed

While Trump’s words in the traditional media may at times appear unpredictable, our analysis suggests a long-term trend in his tweets.

Considering that Trump tweets on average ten times a day and on a range of issues, it is remarkable that in each of the four years, his Twitter networks consistently split into precisely five communities. In other words, by accident or design, his tweets have focused on five broad topics each year since 2015. Some of the issues morph over time, and this is evident both before and during his presidency.

The content in the communities sometimes raises further questions. For example, in the 2018 network, the green community contains the keywords “russia,” “comey,” and “collusion,” referring to the ongoing Russia investigation. The green community, however, also includes “crooked” and “hillary,” and we leave it to pundits to explain how all these keywords are related.

Our take is that by repeating keywords together, his sizable Twitter audience will view them as more likely linked in real life.

Trump is unlikely to stop or even reduce his tweeting anytime soon. Twitter represents a vital aspect of Trump’s media engagement.

Our analysis used network science to map out Trump’s keywords on Twitter and their interactions over the timescale of years. From this approach, we obtain a historical view of the topics that matter to him. A potential future research plan would be to map Trump’s Twitter networks over shorter time periods such as months, weeks or even days.

Every politician and public figure on Twitter has an evolving web of keywords associated with them. These networks are not always evident in our break-neck 24-hour news cycle, and our approach holds the potential to make them more visible. We need only look to network science to uncover them.

Why e-sports should not be in the Olympics

Written by Nicole W. Forrester. Photo credit Robert Paul/Blizzard Entertainment. Originally published in The Conversation.

Jong Seok Kim, a player for the London Spitfire team in the Overwatch League, which gets primetime coverage on ESPN. Will e-sports soon be part of the Olympics?

The International Olympic Committee and the Global Association of International Sports Federations recently hosted an e-sports forum to explore shared similarities, possible partnership and the looming question of whether video gaming could be recognized as an Olympic event.

Ever since the organizers of the 2024 Summer Olympics in Paris first expressed interest in possibly adding electronic sports to the Olympic Games program, we’ve seen a growing interest by the IOC in e-sports — traditionally defined as “organized video game competitions.”

Recognizing the growing interest in e-sports, the organizing committee of the 2024 Summer Olympics in Paris said: “The youth are interested, let’s meet them.”

As an Olympian and former world class high jumper, I struggle with the notion of e-sports becoming an Olympic sport. I am not alone. Conversations I’ve had with other Olympians reveal concerns about comparing the physical skill and demands of traditional athletic competition with e-sports.

Given the IOC’s advocacy role for physical activity, e-sports seems to conflict with its push for an active society.

In an interview with Inside the Games, Sarah Walker, an IOC Athletes’ Commission member and three-time world champion in BMX, explained her opposition.

“If I want to practise any Olympic discipline, if I wanted to try one of them, I actually have to go out and do it. I have to be active. Where gaming is right now, if I was inspired to be a gamer, my first step is to go home and sit on the couch.”

Most Olympians recognize that those who participate in e-sports spend a great deal of time training — even working with nutritionists and sport psychologists to improve their prowess. But is that enough to join the Olympic Games family?

Thomas Bach, president of the International Olympic Committee, attends an e-sport forum held at the Olympic Museum in Lausanne, Switzerland in July 2018. Photo credit Greg Martin/International Olympic Committee.

$1 billion market

Given the growth in popularity, it’s understandable why the IOC would want to partner with e-sports. The IOC generates more than 90 per cent of its revenue from broadcast and sponsorship. Partnering with e-sports, where revenue is generated mostly through sponsorship but where more money is coming from broadcasting, could be complementary and attractive.

The marketing firm Newzoo estimated last year that with brand investment growing by 48 per cent, the global e-sports economy will reach almost $1 billion in 2018.

ESPN provides in-depth e-sports analysis and coverage through a dedicated digital platform, and the network recently announced an exclusive multi-year agreement with Blizzard Entertainment for live television coverage of the professional e-sport Overwatch League, with the finals airing in prime time.

Is e-sport a sport?

Still, the question remains, is e-sports — “organized video game competitions” — actually a sport?

To answer this question, perhaps we need to revisit the academic definition of sport. While differences may exist in their granular descriptions of sport, researchers appear to converge on three central attributes: The sport involves a physical component, it is competitive, and it is institutionalized, meaning a governing body establishes the rules of performance.

While e-sports can be argued to be competitive and institutionalized, the first criterion, physicality, is where it falls short.

Some have argued that the fine motor movements required of e-sport players using hand-held controllers fulfil this criterion. However, the same could be said of various tabletop games.

A 2016 study in Quest, the journal of the National Association for Kinesiology in Higher Education, used the block-building game Jenga to illustrate this point. Jenga requires precision and dexterity, as each player must remove one block from the bottom and delicately place it on top without disturbing the structure. There is even a Jenga World Championship. Perhaps then Jenga should also be considered an Olympic sport.

Since the modern Olympics were first held in 1896, the number of participating sports has grown over the years. The first Games had just nine sports — athletics (track and field), cycling, fencing, gymnastics, shooting, swimming, tennis, weightlifting and wrestling. At the 2016 Summer Olympics in Rio, a total of 28 sports were contested. Five more will be added for the 2020 Games in Tokyo.

Participants at the e-sports forum held at the Olympic Museum in Lausanne, Switzerland. Photo credit Christophe Moratal/International Olympic Committee.

The first step for a sport to be included in the Olympic Games program requires being recognized by the IOC. In this process, the sport must have an overarching international federation (IF) that will govern it — enforcing the rules and regulations of the Olympic Movement, which includes drug testing. (It is also possible for a sport to be recognized as an Olympic sport and never participate in the Games, as is the case for chess, bowling and powerboating.)

Once recognized, the sport’s IF can apply for admittance into the Olympic program as a sport, a discipline or an event. For example, the women’s steeplechase was added to the 2008 Olympic Games as an event within the sport of athletics.

More sports added

An Organising Committee of an Olympic Games (OCOG) can also propose the inclusion of an event. Most recently, the IOC allowed the addition of karate, surfing, sports climbing and baseball/softball to the Olympic program in Tokyo 2020.

Paris 2024 had indicated an interest in including e-sports on its program, but the IOC has said it won’t be eligible by the time the schedule is set in 2020. Still, IOC President Thomas Bach said at the recent e-sports forum that the meeting was a “first step of a long journey” to what could lead to Olympic recognition.

A male-dominated activity

Central to the Olympic Movement and nestled within the criteria of accepting a new sport is gender equality. Interestingly, this has been an area in which e-sports has been heavily criticized.

A study that reviewed gender and gaming determined that even though there are approximately equal numbers of males and females who play video games, most professional gamers are male. Moreover, female players who achieve some level of success are marginalized. Researchers concluded the “video game culture is actively hostile towards women in the private as well as the professional spheres.”

Within the gaming community, it is not a surprise for female players to be harassed.

One notable case involved Miranda Pakozdi, who was sexually harassed for 13 minutes on the live internet program “Cross Assault.” The portrayal of females in e-sports should also concern the IOC. Women are usually depicted as highly sexualized and as victims instead of heroines.

Many Olympians, including me, feel it’s inevitable that e-sports will one day join the Olympic family. Still, one can only wonder if Pierre de Coubertin, the father of the modern Games, would question whether the values of the Olympic Movement are being compromised for the financial enticements that e-sports promise.

Big Brother facial recognition needs ethical regulations

Written by William Michael Carter. Photo credit Shutterstock. Originally published in The Conversation.

Will facial recognition software make the world a safer place, as tech firms are claiming, or will it make the marginalized more vulnerable and monitored?

My mother always said I had a face for radio. Thank God, as radio may be the last place in this technology-enhanced world where your face won’t determine your social status or potential to commit a crime.

RealNetworks, the global leader of a technology that enables the seamless digital delivery of audio and video files across the internet, has just released its latest computer-vision offering: a machine-learning software package. The hope is that this new software will detect, and potentially predict, suspicious behaviour through facial recognition.

Called SAFR (Secure, Accurate Facial Recognition), the toolset has been marketed as a cost-effective way to smoothly blend into existing CCTV video monitoring systems. It will be able to “detect and match millions of faces in real time,” specifically within school environments.

Ostensibly, RealNetworks sees its technology as something that can make the world safer. The catchy branding, however, masks the real ethical issues surrounding the deployment of facial detection systems. Some of those issues include questions about the inherent biases embedded within the code and, ultimately, how that captured data is used.

The Chinese model

Big Brother is watching. No other country in the world has more video surveillance than China. With 170 million CCTV cameras and some 400 million new ones being installed, it is a country that has adopted and deployed facial recognition in an Orwellian fashion.

In the near future, its citizens, and those of us who travel there, will be exposed to a vast and integrated network of facial recognition systems monitoring everything from the use of public transportation to speeding to how much toilet paper one uses in the public toilet.

In this photo from March 2017, visitors to the toilet at the Temple of Heaven park try out a facial recognition toilet paper dispenser in Beijing, China. At the 600-year-old Temple of Heaven, administrators recognized the need to stock the public bathrooms with toilet paper, a requirement for obtaining a top rating from the National Tourism Authority. But they needed a means of preventing patrons from stripping them bare for personal use – hence the introduction of new technology that dispenses just one 60-centimeter (2-foot) section of paper every nine minutes following a face scan. Photo credit AP Photo/Ng Han Guan.

The most disturbing element so far is the recent introduction of facial recognition to monitor school children’s behaviour within Chinese public schools.

As part of China’s full integration of their equally Orwellian social credit system — an incentive program that rewards each citizen’s commitment to the state’s dictated morals — this fully integrated digital system will automatically identify a person. It can then determine one’s ability to progress in society — and by extension that person’s immediate family’s economic and social status — by monitoring the state’s non-sanctioned behaviour.

In essence, facial recognition is making it impossible for those exposed to have the luxury of having a bad day.

Facial recognition systems now being deployed within Chinese schools are monitoring everything from classroom attendance to whether a child is daydreaming or paying attention. It is a full-on monitoring system that determines, to a large extent, a child’s future without considering that some qualities, such as abstract thought, can’t easily be detected, let alone looked upon favourably, by facial recognition.

It also raises some very uncomfortable notions of ethics or the lack thereof, especially towards more vulnerable members of society.

Need for public regulation

RealNetworks’ launch of SAFR comes hot on the heels of Microsoft president Brad Smith’s impassioned manifesto on the need for public regulation and corporate responsibility in the development and deployment of facial recognition technology.

Smith rightly pointed out that facial recognition tools are still somewhat skewed and have “greater error rates for women and people of colour.” This problem is twofold, with an acknowledgement that the people who code may unconsciously embed cultural biases.

The data sets currently available may lack the objective robustness required to ensure that people’s faces aren’t being misidentified, or even worse, predetermined through encoded bias as is now beginning to happen in the Chinese school system.

In an effort to address this and myriad other related issues, Microsoft established an AI and Ethics in Engineering and Research (AETHER) Committee. This committee is also set up to help the company comply with the European Union’s newly enforced General Data Protection Regulation (GDPR) and its eventual future adoption, in some form, in North America.

Smith’s ardent appeal rightly queries the current and future intended use and deployment of facial recognition systems, yet fails to address how Microsoft or, by extension, other AI technology leaders, can eliminate biases within their base code or data sets from the outset.

Minority report

The features of our face are hardly more than gestures which force of habit has made permanent. — Marcel Proust, 1919

Like many technologies, Pandora has already left the box. If you own a smart phone and use the internet, you have already opted out of any basic notions of personal anonymity within Western society.

With GDPR now fully engaged in Europe, visiting a website now requires you to “opt in” to the possibility that the website might be collecting personal data. Facial recognition systems have no means of following GDPR rules, so we as a society are automatically “opted in” and thus completely at the mercy of how our faces are being recorded, processed and stored by governmental, corporate or even privately deployed CCTV systems.

Facial recognition trials held in England by the London Metropolitan Police have consistently yielded a 98 per cent failure rate. Tests in South West Wales have done only slightly better, with less than 10 per cent success.

A computer with an automatic facial recognition system shows Thomas de Maiziere, the former German minister of interior, center right, as he visited the Suedkreuz train station in Berlin, Friday, Dec. 15, 2017. At the train station, German authorities test automatic facial recognition technologies. Photo credit AP Photo/Markus Schreiber.

Conversely, University of California, Berkeley, scientists have concluded that substantive facial variation is an evolutionary trait unique to humans. So where is the disconnect?

If, as Marcel Proust suggested, our lives and thus our personalities are uniquely identifiable by our faces, why can’t facial recognition systems easily return positive results?

The answer goes back to how computer programming is written and the data sets used by that code to return a positive match. Inevitably, code is written to support an idealized notion of facial type.

As such, outlying variations like naturally occurring facial deformities or facial features affected by physical or mental trauma represent only a small fraction of the infinite possible facial variations in the world. The data sets assume we are homogeneous doppelgängers of each other, without addressing the micro-variations of people’s faces.

If that’s the case, we are all subject to the possibility that our faces as interpreted by the ever-increasing deployment of immature facial recognition systems will betray the reality of who we are.