Canada needs to consider the user experience of migrants when designing programs that impact them

Written by Lucia Nalbandian, University of Toronto and Nick Dreher, Toronto Metropolitan University. Photo credit THE CANADIAN PRESS/Nathan Denette. Originally published in The Conversation.

People walk through Pearson International Airport in Toronto in March 2020.

The first interaction many Canadians have with government services today is digital. Older Canadians turn to the internet to understand how to file for Old Age Security or track down a customer service phone number. Parents visit school district websites for information on school closures, schedules and curricula.

These digital offerings present an opportunity to enhance the quality of services and improve citizens’ experiences by taking a human-centred design approach.

Our research has revealed that governments across the globe are increasingly leveraging technology in immigration and integration processes. As Canadian government services focus on improving the experience of their citizens, efforts should be extended to future citizens as well.

Immigrants are a vital part of the Canadian economy and social fabric. In announcing Canada’s new immigration target of 500,000 permanent residents per year by 2025, Immigration Minister Sean Fraser said the numbers strike a balance between our economic needs and international obligations.

Bar graph showing the increasing number of immigrants Canada plans to welcome into the country over the coming years.
Canada’s new Immigration Levels Plan aims to welcome 465,000 new permanent residents in 2023, 485,000 in 2024 and 500,000 in 2025. Photo credit: Statistics Canada, author provided.

Despite the importance of immigrants for the Canadian economy and national identity, it remains to be seen if immigrants are engaged in the development of policies, services and technological tools that impact them.

Advancements in the immigration sector

Canada has steadily been introducing digital technologies into services, programs and processes that impact migrants. This has especially been the case since the COVID-19 pandemic forced organizations to innovate their services and programs without diminishing the overall quality of service.

Immigration scholar Maggie Perzyna developed the COVID-19 Immigration Policy Tracker to examine how the Immigration and Refugee Board and Immigration, Refugees and Citizenship Canada (IRCC) used digital tools to enable employees to work from home. This helped reduce the administrative burden and increased efficiency.

A grid of people raise their hands during a virtual citizenship ceremony as seen on a mobile phone screen.
Participants raise their hands as they swear the oath to become Canadian citizens during a virtual citizenship ceremony held over livestream due to the COVID-19 pandemic, on July 1, 2020. Photo credit: THE CANADIAN PRESS/Justin Tang.

There continues to be a strong case for technological transformation in Canada’s immigration-focused departments, programs and services. As of July 31, 2022, Canada has a backlog of 2.4 million immigration applications.

In other words, Canada is failing to meet the application processing timelines it has set for itself for services — including passport renewal, refugee travel documents, and work and study permits.

While the Canadian government is trying to address these backlogs, there appears to be no discussion of asking immigrants about their journey through the application process. Rather, the government appears to centre its employees. Meanwhile, problems with the Global Case Management System persist, creating difficulties for both employees and the people using government services.

Virtual processes were prioritized

While the Canadian government previously introduced a machine learning pilot tool to sort through temporary resident visa applications, its use stagnated due to pandemic-related border closures. Instead, virtual processes and digitization were prioritized, including:

  • Shifting from paper to digitized applications for the following: spousal and economic class immigration, applicants to the Non-Express Entry Provincial Nominee Program, the Rural and Northern Immigration Pilot, the Agri-Food Pilot, the Atlantic Immigration Pilot, the Québec Selected Investor Program, the Québec Entrepreneur Program, the Québec Self-Employed Persons Program and protected persons.
  • Hosting digital hearings for spousal immigration applicants, pre-removal risk assessments and refugees using video-conferencing.
  • Introducing secure document exchanges and the ability to view case information across various immigration streams.
  • Offering virtual citizenship tests and citizenship ceremonies.

Additionally, Fraser announced further measures to improve the user experience, modernize the immigration system and address challenges faced by people using IRCC. This strategy will also involve using data analytics to aid officers in sorting and processing visitor visa applications.

A man in a suit and tie speaking into a microphone. Behind him stands a line of Canadian flags.
Immigration, Refugees and Citizenship Minister Sean Fraser speaks during a news conference with UN High Commissioner for Refugees Filippo Grandi in April 2022 in Ottawa. Photo credit: THE CANADIAN PRESS/Adrian Wyld.

These changes showcase Canada’s efforts to pair existing challenges with existing solutions — initiatives that require relatively low effort. Yet while these digitization efforts streamline administrative processes and reduce administrative burden, application backlogs persist.

While useful, these initiatives focus on efficiencies in IRCC processing for employees. As IRCC evaluates and develops processes, it should prioritize the experience of end users by taking a migrant-centred design approach.

The value of human-centred design

Human-centred design is the practice of putting real people at the centre of any development process, including for programs, policies or technology. It places the end user at the forefront of development so user needs and preferences are considered each step of the way.

To maximize value in technology implementation, IRCC should take a migrant-centred design approach: apply human-centred design principles with migrants treated as the end users. This approach should consider the following suggestions:

  1. Centre migrants in the development of immigration programs, policies and services, and digital and technological tools by seeking migrants’ input in forthcoming and proposed changes.
  2. Take a life-course approach to human and social service delivery by recognizing that “all stages of a person’s life are intricately intertwined with each other.” Government services prioritize Canadian citizens, but a life-course approach understands that all individuals in Canada, regardless of their current immigration status, represent potential Canadian citizens. Government services should implement a life-course approach by prioritizing quality services for migrants, some of whom are seniors who will require different government services in the future.
  3. Combat discrimination and bias in developing new immigration technology tools. Artificial intelligence and other advanced digital technologies have the capacity to reproduce biases and discrimination that currently exist in IRCC. Any new technologies must be evaluated to prevent discrimination or bias.

As Canada continues to explore how technology can help streamline and improve the migrant journey, migrant-centred design should be at the forefront of its planning. When we design processes, policies and tools with their intended users at the centre, they are more likely to resonate with those users.

If Canada wants to be a first-choice migration destination, we need to approach immigration policies — including technology use — as opportunities to empower and encourage migrants.

The metaverse offers challenges and possibilities for the future of the retail industry

Written by Omar H. Fares, Toronto Metropolitan University. Photo credit Shutterstock. Originally published in The Conversation

As technology improves, the potential for retailers to make use of the metaverse will grow.

In 1968, American computer scientist Ivan Sutherland predicted the future of augmented and virtual reality with his concept of the “Ultimate Display.” The Ultimate Display relied on the kinetic depth effect to create two-dimensional images that moved with its users, giving the illusion of a three-dimensional display.

While the concept of virtual reality only focuses on the creation of three-dimensional environments, the metaverse — a term coined by Neal Stephenson in his 1992 book Snow Crash — is a much broader concept that surpasses this.

While no official definition of the metaverse truly exists, science and technology reporter Matthew Sparkes provides a decent one. He defines the metaverse as “a shared online space that incorporates 3D graphics, either on a screen or in virtual reality.”

Since the term was coined, the idea of the metaverse has remained more of a fictional concept than a scientific one. However, with technological advancements in recent years, the metaverse has become more tangible. Much of the recent hype followed Mark Zuckerberg’s announcement that the Facebook brand would be renamed Meta. Many retailers have since jumped aboard the metaverse train.

A white man in a black long-sleeved shirt gestures while speaking.
Meta chief executive Mark Zuckerberg announced Facebook’s name change to Meta in 2021. He said the move reflected the company’s interest in broader technological ideas, like the metaverse. Photo credit Nick Wass/AP Photo.

Nike recently filed multiple trademarks allowing it to create and sell Nike shoes and apparel virtually. JP Morgan opened its first virtual bank branch. Samsung recreated its New York City flagship store in the virtual browser-based platform Decentraland, where it launches new products and hosts events.

While many retailers are capitalizing on the metaverse early, there is still uncertainty about whether the metaverse really is the future of retailing or whether it will be a short-lived fad.

Dispelling metaverse myths

Much of that uncertainty around the metaverse stems from confusion about the technology. While examining the top keyword associations related to the metaverse on Google Trends, I found “what is metaverse” and “metaverse meaning” to be the top phrases customers searched for. To alleviate some of this confusion, it’s important to dispel commonly held myths about the metaverse.

Myth 1: You need a VR headset to access the metaverse

While an optimal experience in the metaverse can be achieved through VR headsets, anyone can access the metaverse through their personal computers. For instance, customers can create their avatars and access the metaverse in Decentraland on screen without a VR headset.

A virtual avatar in a green shirt, black pants, and sneakers standing in a virtual world.
My virtual avatar in Decentraland. Photo credit: (Decentraland Foundation), author provided.

Myth 2: The metaverse will replace real-life interactions

Rather than replacing existing modes of communication, the metaverse offers a more interactive one. New technologies always bring about predictions of the end of physical interactions. It’s helpful to compare the metaverse with the rise of smartphones: smartphones enhance communication by allowing people to interact with their social networks, but they have not replaced face-to-face interactions. The metaverse will be the same.

Myth 3: The metaverse is just for gaming

While gaming remains the dominant driver of user involvement with the metaverse (97 per cent of gaming executives believe that gaming is the centre of the metaverse today), it’s not the only activity people can take part in.

In a recent survey, McKinsey & Company asked customers what their preferred activity on the metaverse would be in the next five years. Shopping virtually ranked the highest, followed by attending telehealth appointments and virtual synchronous courses.

Keeping expectations realistic

In its current form, the metaverse lacks the technological infrastructure to deliver on market expectations. It may be appropriate to compare the metaverse with the dot-com bubble between 1995 and 2000 that was caused by speculation in internet-based businesses.

Similarly, there appears to be tremendous hype and expectations around what the technology can deliver in its current form. A recent survey of 1,500 consumers found that 51 per cent of people expect customer service to be better in the metaverse, 32 per cent expect less frustration and anxiety while dealing with customer service agents in the metaverse compared to phone interactions, and 27 per cent expect interactions with metaverse virtual avatar assistants to be more effective than online chat-bots.

While such expectations can appear reasonable, metaverse technology is still in its infancy, with the focus remaining on developing infrastructure and processes for the future. These unrealistic expectations may lead to a metaverse bubble as reality struggles to meet them.

Challenges for retailers

As with any emerging technology, retailers need to be prepared for challenges posed by the metaverse. Some of these challenges include the following:

  • Data security and privacy: With the novelty of metaverse technology and the wealth of personal data collected, the metaverse will be an attractive target for cyber-hackers. New approaches and methods need to be considered for a safe metaverse that customers can trust.
  • Experienced talent: Having the right talent that can create, manage and support experiences in the metaverse needs to be at the forefront of engaging with the technology. However, due to the novelty of the technology, finding such talent will be a challenge.
  • Regulations: With no clear jurisdictions and regulations in place, the safety of virtual spaces in the metaverse may be compromised and end up pushing customers away. Retailers need to ensure these spaces are safe and protected.
  • Managing customers’ expectations: Retailers need to educate their customers about what can currently be done in the metaverse, and what customers should expect from businesses in the metaverse.

Despite these challenges, retailers will still be able to craft novel shopping experiences in the metaverse — it will just require appropriately skilled and qualified people to make it happen. With appropriate planning and preparation, retailers will be able to meet these challenges head-on.

A woman wearing a VR headset standing in a shopping mall.
The metaverse will have the potential to revolutionize the retail industry once the technology is advanced enough. Photo credit Shutterstock.

Opportunities for retailers

As technology improves, the potential uses of the metaverse for retailers will grow. At the moment, the metaverse offers retailers three key opportunities for improving the online shopping experience.

The first is brand exposure. Retailers can expand their presence through virtual billboards and interactive advertisements with less noise than on existing online and mobile channels. Cloud Nine, an IT services company, is one of the earliest companies to advertise its services on virtual billboards in Decentraland. Virtual billboard advertising is something marketers should keep in mind.

Secondly, the metaverse offers unique experiences for customers to engage with brands through events, contests, and game-like features. Such experiences could increase loyalty and brand engagement. The Metaverse Fashion Week is an example of how retailers can create unique brand engagement opportunities. Retailers including Tommy Hilfiger, Perry Ellis and Dolce & Gabbana all participated in the pilot experience, leading the wave for immersive and unique customer-brand interactions.

Lastly, the metaverse provides retailers the chance to personalize customer experiences. Similar to how retailers can customize customers’ online experiences through data collection, retailers can tailor customer experiences in the virtual environment. In Meta’s Horizon Worlds, for example, users can create their own virtual worlds, invite friends and customize their own experiences.

Elon Musk’s Twitter Blue fiasco: Governments need to better regulate how companies use trademarks

Written by Alexandra Mogyoros, Toronto Metropolitan University. Photo credit Shutterstock. Originally published in The Conversation.

We often trust corporate logos and symbols without necessarily understanding the legal statutes that govern them. 

Until recently, Twitter’s blue checkmark logo was (for better or worse) a trusted mark of authenticity. But under the façade of democratizing the platform, Elon Musk allowed the blue checkmark to be purchased by anyone — with unsurprisingly chaotic results.

Impersonators soon made use of the blue checkmark, with negative consequences for those brands, companies and public figures who had their Twitter accounts impersonated.

The twitter verification symbol: a blue circle with a white tick in the middle.
Twitter’s blue checkmark informs users that accounts are verified as authentic. The sale of verifications led to widespread impersonations on Twitter. Photo credit: Shutterstock.

After a Twitter Blue account impersonated pharmaceutical giant Eli Lilly and announced that insulin would be free, the company lost over US$15 billion in market cap. This shines a light on a greater problem in our society: how we trust logos without necessarily understanding the standards or quality behind them.

Musk’s Twitter Blue campaign capitalized on users’ trust and profited from it. The decision highlights the larger problem of consumers relying on logos that appear to be trustworthy, but really provide little to no substantiation. It calls out for better regulation of how social media platforms manage misinformation and disinformation.

Logos communicate information that consumers trust

Logos do more than signal the brand behind a product (like Nike’s swoosh or Starbucks’ siren); they also tell us things about a product, like whether it is certified vegan or gluten free. We don’t necessarily understand what logos, like verification badges or Cineplex’s VENUESAFE logo, are claiming to certify or how, yet we trust them to signify a certain level of safety and authenticity. This trust comes in many forms and can be earned, or acquired, in lots of ways.

Sometimes we trust a logo simply because it uses aesthetic attributes that implicitly signal trustworthiness to us: for example, they might appear like a seal, or checkmark, or make use of words like “verified,” “certified” or “guaranteed.”

Sometimes we trust them because they have websites that explain to us in clear terms the exclusivity of being able to use the mark. Other times, we simply trust the brand or platform making use of those logos and let their goodwill transfer to the symbol in question.

Elon Musk allowing verification badges to simply be bought by anyone is an example of how powerful and misguided trust in logos can be. When people see a logo that seems to verify something, they often make assumptions both about what quality is being promised and the legitimacy of that promise.

From verification badges to loyalty checkmarks

Musk purported to be irked by the “exclusivity” of verification marks. Twitter’s previous verification program began to affirm the identity of some Twitter users in response to problems with impersonation.

He tweeted: “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit. Power to the people! Blue for $8/month.”

But, “verification to anyone willing to pay for it ignores the reasons the existing system was put in place and potentially undermines the overall trust in Twitter that it’s supposed to provide.” Allowing users to buy the blue checkmark logo undermined the trustworthiness it had earned. The same logo suddenly signalled two very different kinds of information and caused confusion.

It didn’t matter that Musk had announced that the blue checkmark’s meaning had been effectively corrupted. Information being available to consumers isn’t always a cure-all in the face of reliance and trust.

Not all logos are regulated equally

The use of these symbols is regulated to varying degrees. While consumer protection law prevents us from being outright lied to or misled, these marks are insidious: they don’t necessarily guarantee us anything, yet they command our trust through the standards they implicitly promise.

Our legal system does not provide substantive oversight of these checkmarks, nor does it adequately recognize the role trust plays in consumers’ reliance on them. This can cause problems. Two weeks ago, it caused problems for Eli Lilly.

Previously, it has caused problems for the communities that empty certification marks promise to help but do not. It also causes problems for the consumers who trust an ultimately untrustworthy source.

The Twitter symbol on the company's headquarters in San Francisco.
The Twitter symbol on the company’s headquarters in San Francisco. Corporate brands are protected through trademark laws irrespective of how companies behave. Maybe it is time we reconsider that. Photo credit: Jeff Chiu/AP Photo

Logos are essential to brand identity and are extraordinarily valuable assets to their corporate owners. Consequently, brands do not take kindly to having their ability to use their logos limited. Our legal system needs to do better and govern logos through trademark law in a way that more realistically reflects the role they play.

Brands need to be held accountable. They are protected through trademark laws irrespective of how they behave. Maybe it is time we reconsider that.

Musk turning Twitter’s verification badge into a subscription service was wrong, and likely strategically motivated. Musk has since announced that Blue Verified will be relaunched on Nov. 29 to ensure it is “rock solid.”

At the end of the day, the blue checkmark will only be as trustworthy as the brand that stands behind it. Right now, that brand is Elon Musk.

What is the metaverse, and what can we do there?

Written by Adrian Ma, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

What will it take for the metaverse to live up to its potential?  

You’ve likely heard recently how the metaverse will usher in a new era of digital connectivity, virtual reality (VR) experiences and e-commerce. Tech companies are betting big on it: Microsoft’s massive US$68.7 billion acquisition of game developing giant Activision Blizzard reflected the company’s desire to bolster its position in the interactive entertainment space.

Prior to this, Facebook’s parent company rebranded itself as Meta — a key pillar of founder Mark Zuckerberg’s grand ambitions to reimagine the social media platform as “a metaverse company, building the future of social connection.”

But other non-tech corporations are clamouring to get in on the ground floor as well, from Nike filing new trademarks to sell virtual Air Jordans and Walmart preparing to offer virtual merchandise in online stores using its own cryptocurrency and non-fungible tokens (NFTs).

As a journalism professor who has been researching the future of immersive media, I agree the metaverse opens up transformative opportunities. But I also see inherent challenges in its road to mainstream adoption. So what exactly is the metaverse and why is it being hyped up as a game-changing innovation?

Entering the metaverse

The metaverse is “an integrated network of 3D virtual worlds.” These worlds are accessed through a virtual reality headset; users navigate the metaverse using their eye movements, feedback controllers or voice commands. The headset immerses the user, creating what is known as presence: the physical sensation of actually being there.

To see the metaverse in action, we can look at popular massively multiplayer virtual reality games such as Rec Room or Horizon Worlds, where participants use avatars to interact with each other and manipulate their environment.

But the wider applications beyond gaming are staggering. Musicians and entertainment labels are experimenting with hosting concerts in the metaverse. The sports industry is following suit, with top franchises like Manchester City building virtual stadiums so fans can watch games and, presumably, purchase virtual merchandise.

Perhaps the farthest reaching opportunities for the metaverse will be in online learning and government services.

Children using laptops sit at a table with a digital dinosaur hologram in the middle.
The metaverse contains exciting new applications for education at all levels. Photo credit Shutterstock.

This is the popular conception of the metaverse: a VR-based world independent of our physical one where people can socialize and engage in a seemingly unlimited variety of virtual experiences, all supported with its own digital economy.

More than virtual reality

But there are challenges to overcome before the metaverse can achieve widespread, global adoption. And one key challenge is the “virtual” part of this universe.

While VR is considered a key ingredient of the metaverse recipe, entrance to the metaverse is not (and should not be) limited to those with a VR headset. In a sense, anyone with a computer or smartphone can tap into a metaverse experience, such as the digital world of Second Life. Given VR’s continued uphill battle to gain traction with consumers, broad accessibility is key to making the metaverse work.

The VR market has seen remarkable innovations in a short period of time. A few years ago, people interested in home VR had to choose between expensive computer-based systems that tethered the user or low-cost but extremely limited smartphone-based headsets.

Now we’ve seen the arrival of affordable, ultra high-quality, portable wireless headsets like Meta’s Quest line, which has quickly become the market leader in home VR. The graphics are sensational, the content library is more robust than ever, and the device costs less than most video game consoles. So why are so few people using VR?

On one hand, global sales of VR headsets have been growing, with 2021 being a banner year for headset manufacturers, who had their best sales since 2016’s flurry of big-brand VR device releases. But they still only sold around 11 million devices worldwide.

Getting people to even use their devices can be a challenge, as it’s estimated only 28 per cent of people who own VR headsets use them on a daily basis. As numerous tech critics have pointed out, the VR mainstream revolution that has been promised for years has largely failed to come to fruition.

A woman wearing a VR headset with an outstretched hand.
Virtual reality headsets are increasing in popularity, but there are challenges to their widespread adoption. Photo credit Shutterstock.

Virtual movement, physical discomfort

There are myriad factors, from missed marketing opportunities to manufacturing obstacles, behind why VR hasn’t caught on in a bigger way. But it’s possible that using VR is inherently unappealing to a significant number of people, particularly for frequent use.

Despite impressive advancements in screen technology, VR developers are still trying to address the “cybersickness” — a feeling of nausea akin to motion sickness — their devices elicit in many users.

Studies have found that neck discomfort may present another barrier, one that may remain an issue as long as VR requires the use of large headsets. There’s also research to suggest that women experience much higher levels of discomfort because the fit of the headset is optimized for men.

And beyond the physical challenges of using VR is the isolating nature of it: “Once you put on the headset, you’re separated from the world around you,” writes Ramona Pringle, a digital technology professor and researcher.

Certainly, some are drawn to VR to experience heightened escapism or to interact with others virtually. But this disconnection from the physical world, and the uneasy feeling of separation from other people, may be a significant hurdle in getting people to voluntarily wear a headset for hours at a time.

Mediated, magical worlds everywhere

Augmented reality (AR) experiences may hold the key to the metaverse reaching its true potential. With AR, people use their smartphones (or other devices) to digitally enhance what they perceive in the physical world in real time, allowing them to tap into a virtual world while still feeling present in this one.

An interview with video games researcher and designer Kris Alexander on the potential of augmented reality.

A metaverse centred on augmented reality wouldn’t be a completely new digital world — it would intersect with our real world. It’s this version of the metaverse that could actually have the ability to change the way we live, argues computer scientist and tech writer Louis Rosenberg:

“I believe the vision portrayed by many Metaverse companies of a world filled with cartoonish avatars is misleading. Yes, virtual worlds for socializing will become quite popular, but it will not be the means through which immersive media transforms society. The true Metaverse — the one that becomes the central platform of our lives — will be an augmented world. If we do it right, it will be magical, and it will be everywhere.” 

This federal election, the Liberals are outspending all the other parties combined when buying ads on Facebook

Written by researchers at Ryerson University. Photo credit: THE CANADIAN PRESS/Nathan Denette. Originally published in The Conversation.

Liberal Leader Justin Trudeau, right, leaves the stage with MP candidate Chrystia Freeland after revealing his party’s election platform.

Today, 94 per cent of Canadian adults who use the internet have at least one social media account, and 83 per cent report having a Facebook account. This trend will likely continue as more people turn to the internet and social media to stay connected.

The shift in how and where people spend their time and attention has given rise to a widely adopted practice called microtargeting. Microtargeting is a marketing strategy that uses people’s demographic and social media data — the things we “like” on social media, who we are friends with, businesses we have frequented, and so on — to identify and segment people into narrowly defined small groups and show them personalized ads.
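To make the idea concrete, here is a minimal sketch of what segmenting users into one of those narrowly defined groups might look like. The user records, attributes and audience definition below are invented for illustration and are not drawn from any platform’s actual targeting system.

```python
# Toy user records standing in for the demographic and behavioural data a
# platform might hold; the fields and values are invented for illustration.
users = [
    {"id": 1, "age": 67, "province": "ON", "likes": {"gardening", "news"}},
    {"id": 2, "age": 23, "province": "BC", "likes": {"gaming", "climate"}},
    {"id": 3, "age": 66, "province": "ON", "likes": {"news", "travel"}},
]


def segment(users, min_age, province, interest):
    """Return the narrowly defined group an ad buyer has asked to reach."""
    return [
        u["id"]
        for u in users
        if u["age"] >= min_age and u["province"] == province and interest in u["likes"]
    ]


# e.g. seniors in Ontario who engage with news content
audience = segment(users, min_age=65, province="ON", interest="news")
print(audience)  # -> [1, 3]
```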

In recent years, digital political ad spending has exploded. And in this federal election, the Liberal Party of Canada is outspending all of the other major federal parties combined, while the NDP’s political ads are being shown to Facebook users under 18.

Finely tuned machine

As a platform, Facebook is a finely tuned microtargeting machine. It’s one of the main reasons why political campaigns in places like the United States have been “flooding Facebook with ad dollars.”

This trend is also happening here in Canada. Using Facebook’s Ad Library, we found that between July 31 and Aug. 29, major political parties in Canada had spent nearly $2.5 million across Facebook, Instagram and Messenger. The federal Liberal Party alone spent $1.5 million on 7,038 ads, far outpacing the combined spending by the other major federal parties.

Amount spent on Facebook ads by major political parties in Canada between July 31 and Aug. 29, 2021. Photo credit: Facebook Ad Library Report.

Analyzing the data

As part of the Social Media Lab’s Election 44 transparency and accountability initiative, we have been tracking Canadian political ad spending on Facebook using PoliDashboard, a data visualization tool designed to help voters, journalists and campaign staffers monitor political discourse in Canada. The dashboard is part of our ongoing research on online engagement and the use of social bots to influence public opinion on issues of national importance, like the elections and the ongoing COVID-19 pandemic.

PoliDashboard is publicly accessible and consists of two main modules. The first is the #CDNPoli Twitter Module, which provides near real-time analysis of public #CDNPoli tweets, including detecting the presence of bots or automated accounts. The second is the Facebook Political Ads Module, which collects and analyzes data about political advertisers and the ads they are running on Facebook.

The tool spotlights people and organizations vying for voters’ attention on social media and brings more transparency to online political discourse.

PoliDashboard is a data visualization tool to monitor political discourse in Canada. Photo credit: PoliDashboard/Ryerson University Social Media Lab.

The Facebook Political Ads Module shows information about active and inactive ads involving social issues, elections or politics across Facebook products in Canada and is automatically updated every four hours via the Facebook Ad Library API. The module generates two interactive charts showing all of the ads the advertiser is running, who they are targeting and where in Canada the ad was shown.

PoliDashboard automatically aggregates political ads purchased by an advertiser, displaying how individual advertisers in Canada deploy their ad budget, where the ads are shown and who they are targeting with each ad.
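For readers curious about what such a collection pipeline can look like, the sketch below polls Meta’s public Ad Library API for Canadian political ads and tallies reported spending by advertiser. It is a minimal illustration rather than PoliDashboard’s actual code; the API version, parameter formats and placeholder access token are assumptions based on the publicly documented interface.

```python
import requests
from collections import defaultdict

# Assumed endpoint, API version and parameter formats for Meta's public
# Ad Library API; this sketch is not PoliDashboard's actual pipeline.
AD_ARCHIVE_URL = "https://graph.facebook.com/v18.0/ads_archive"
ACCESS_TOKEN = "YOUR_AD_LIBRARY_TOKEN"  # hypothetical placeholder


def fetch_political_ads(search_terms="election", limit=250):
    """Page through Canadian political/issue ads matching a search term."""
    params = {
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["CA"]',  # JSON-style list, as in the API docs
        "search_terms": search_terms,
        "fields": "page_name,spend,ad_delivery_start_time",
        "limit": limit,
    }
    ads, url = [], AD_ARCHIVE_URL
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        payload = resp.json()
        ads.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")  # follow pagination links
        params = None  # the "next" URL already carries the full query string
    return ads


def spend_by_advertiser(ads):
    """Sum the midpoint of each ad's reported spend range per advertiser."""
    totals = defaultdict(float)
    for ad in ads:
        spend = ad.get("spend") or {}
        low = float(spend.get("lower_bound", 0))
        high = float(spend.get("upper_bound", low))
        totals[ad.get("page_name", "unknown")] += (low + high) / 2
    return dict(totals)
```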

Targeted audiences

According to our analysis of parties’ ad spending on Facebook during the first two weeks of the campaign (Aug. 15 to 28), the Liberals, the Conservatives and the NDP ran most of their ads in the four largest provinces: Ontario, Québec, British Columbia and Alberta, which is to be expected as these are also the most vote-rich provinces. Almost all of the Bloc Québécois’s ads ran in Québec.

Both the Liberals and the NDP largely targeted women on Facebook, while the Conservative Party’s most frequently targeted audience consisted of men. The Bloc mostly targeted men in the 45-64 age group and women 65 and older. These findings are in line with new survey data from Nanos Research showing that the Conservatives are surging with male voters and the Liberals with female voters.

Facebook’s Political Ads by the different political parties between Aug. 15 and 28. The national Green Party did not run any ads on Facebook during this period. Photo credit: PoliDashboard/Ryerson University Social Media Lab.

The most striking difference between the ad strategies of the different parties, however, was in the age group of targeted voters. The Liberals frequently targeted their ads towards seniors (especially people 65 and older). So did the Bloc, while the Conservatives aimed for middle-aged voters and the NDP went after younger voters.

PoliDashboard has also revealed that some political ads were shown to people who cannot legally vote: Canadian citizens under 18. Curiously, our data shows that some of the NDP’s political ads were shown to Facebook users under 18. Of the 334 ads run by the NDP, 46 were shown to Facebook users aged 13 to 17 more than 75,000 times.

However, without additional data about the targeting criteria used for these 46 ads, it is not possible for us to know why they were shown to underage users. It does not appear as though this underage group was specifically targeted by the party, since the same ads were shown to other age groups.

Behind the curtain

We are now aware of who the parties are targeting with their ads. These glimpses into who is vying for voters’ attention on Facebook are a keen reminder that much of how Facebook functions is still a mystery to the public.

As more campaigns turn to social media to reach voters, the lack of transparency in digital political advertising and the role of algorithms in microtargeting raise many questions about accountability and transparency in our democratic processes.

At a minimum, transparency should include information about the criteria that political advertisers and Facebook use for targeting each ad. Without such information, it will be very difficult for political opponents, watchdog groups and election regulators to catch and flag falsehoods or engage in counterspeech.

The Taliban may have access to the biometric data of civilians who helped the U.S. military

Written by a researcher at Ryerson University. Photo credit: AP Photo/Rahmat Gul. Originally published in The Conversation.

Taliban fighters stand guard at a checkpoint in Kabul, Afghanistan, on Aug. 18, 2021.

In 2007, the United States military began using a small, handheld device to collect and match the iris, fingerprint and facial scans of over 1.5 million Afghans against a database of biometric data. The device, known as Handheld Interagency Identity Detection Equipment (HIIDE), was initially developed by the U.S. government as a means to locate insurgents and other wanted individuals. Over time, for the sake of efficiency, the system came to include the data of Afghans assisting the U.S. during the war.

Today, HIIDE provides access to a database of biometric and biographic data, including of those who aided coalition forces. Military equipment and devices — including the collected data — are speculated to have been captured by the Taliban, who have taken over Afghanistan.

This development is the latest in many incidents that exemplify why governments and international organizations cannot yet securely collect and use biometric data in conflict zones and in their crisis responses.

Building biometric databases

Biometric data, or simply biometrics, are unique physical or behavioural characteristics that can be used to identify a person. These include facial features, voice patterns, fingerprints or iris features. Often described as the most secure method of verifying an individual’s identity, biometric data are being used by governments and organizations to verify and grant citizens and clients access to personal information, finances and accounts.

According to a 2007 presentation by the U.S. Army’s Biometrics Task Force, HIIDE collected and matched fingerprints, iris images, facial photos and biographical contextual data of persons of interest against an internal database.

In a May 2021 report, anthropologist Nina Toft Djanegara illustrates how the collection and use of biometrics by the U.S. military in Iraq set the precedent for similar efforts in Afghanistan. There, the “U.S. Army Commander’s Guide to Biometrics in Afghanistan” advised officials to “be creative and persistent in their efforts to enrol as many Afghans as possible.” The guide recognized that people may hesitate to provide their personal information and therefore, officials should “frame biometric enrolment as a matter of ‘protecting their people.’”

Inspired by the U.S. biometrics system, the Afghan government began work to establish a national ID card, collecting biometric data from university students, soldiers, and applicants for passports and driver’s licences.

Although it remains uncertain at this time whether the Taliban has captured HIIDE and if it can access the aforementioned biometric information of individuals, the risk to those whose data is stored on the system is high. In 2016 and 2017, the Taliban stopped passenger buses across the country to conduct biometric checks of all passengers to determine whether there were government officials on the bus. These stops sometimes resulted in hostage situations and executions carried out by the Taliban.

Placing people at increased risk

We are familiar with biometric technology through mobile features like Apple’s Touch ID or Samsung’s fingerprint scanner, or by engaging with facial recognition systems while passing through international borders. For many people located in conflict zones or relying on humanitarian aid in the Middle East, Asia and Africa, biometrics are presented as a secure measure for accessing resources and services to fulfil their most basic needs.

In 2002, the United Nations High Commissioner for Refugees (UNHCR) introduced iris-recognition technology during the repatriation of more than 1.5 million Afghan refugees from Pakistan. The technology was used to identify individuals who sought funds “more than once.” If the algorithm matched a new entry to a pre-existing iris record, the claimant was refused aid.

An Afghan internally displaced refugee receives winter necessities from the UNHCR in 2017. Photo credit: AP Photo/Rahmat Gul.

The UNHCR was so confident in the use of biometrics that it decided not to allow disputes from refugees altogether. From March to October 2002, 396,000 false claimants were turned away from receiving aid. However, as communications scholar Mirca Madianou argues, iris recognition has an error rate of two to three per cent, suggesting that roughly 11,800 of those alleged false claimants may have been wrongly denied aid.
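That figure follows from applying the reported error rate to the 396,000 rejected claims; at the upper end of the two-to-three-per-cent range:

\[
396{,}000 \times 0.03 \approx 11{,}880
\]

with roughly 7,920 at the lower end, which is where the estimate of about 11,800 wrongly denied claimants comes from.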

Additionally, since 2018, the UNHCR has collected biometric data from Rohingya refugees. However, reports recently emerged that the UNHCR shared this data with the government of Bangladesh, who subsequently shared it with the Myanmar government to identify individuals for possible repatriation (all without the Rohingya’s consent). The Rohingya, like the Afghan refugees, were instructed to register their biometrics to receive and access aid in conflict areas.

The UNHCR collects the biometric data of refugees in Uganda.

In 2007, as the U.S. government was introducing HIIDE in Afghanistan, the U.S. Marine Corps was walling off Fallujah in Iraq, supposedly to deny insurgents freedom of movement. To get into Fallujah, individuals required a badge, obtained by exchanging their biometric data. After the U.S. retreated from Iraq in 2020, the database remained in place, including all the biometric data of those who worked on bases.

Protecting privacy over time

Registering in a biometric database means trusting not just the current organization requesting the data but any future organization that may come into power or have access to the data. Additionally, the collection and use of biometric data in conflict zones and crisis response present heightened risks for already vulnerable groups.

While collecting biometric data is useful in specific contexts, this must be done carefully. Ensuring the security and privacy of those who could be most at risk and those who are likely to be compromised or made vulnerable is critical. If security and privacy cannot be ensured, then biometric data collection and use should not be deployed in conflict zones and crisis response.

As cyberattacks skyrocket, Canada needs to work with — and not hinder — cybersecurity experts

Written by researchers at Ryerson University. Photo credit: Alexandre Debiève/Unsplash. Originally published in The Conversation.

Cyberattacks are on the rise, impacting people, systems, infrastructures and governments with potentially devastating and far-reaching effects. Most recently, these include the massive REvil ransomware attack and the discovery that the Pegasus spyware was tracking more than 1,000 people.

A common cause of cyberattacks involves the exploitation of security vulnerabilities. These are conditions or behaviours that can enable the breach, misuse and manipulation of data. Examples can include poorly written computer code or something as simple as failing to install a security patch.
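To make “poorly written computer code” concrete, here is a minimal, hypothetical example of one of the most common classes of vulnerability, an SQL injection flaw, next to the parameterized query that avoids it. The table and column names are invented for illustration.

```python
import sqlite3


def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is pasted directly into the SQL string, so an
    # input like  x' OR '1'='1  returns every row in the (hypothetical) table.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # SAFE: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```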

Exploiting vulnerabilities

There can be particularly significant impacts when attackers exploit security vulnerabilities involving digital systems used by federal governments.

For example, in July 2015, the United States Office of Personnel Management announced that malicious hackers had exfiltrated highly sensitive personal information and fingerprints of roughly 21.5 million federal workers and their associates, due to a string of poor security practices and system vulnerabilities.

The massive data breach served as a wake-up call for the U.S. federal government. Barack Obama’s administration consequently announced the Department of Defense would be responsible for storing federal employee data.

Not long after that, the “Hack the Pentagon” pilot program was announced, where the U.S. government invited external experts to responsibly report security flaws.

In 2016, the Pentagon announced a program to help it identify security vulnerabilities.

This pilot paved the way for what has become a standard security practice used by the U.S. government. Since 2020, all American federal agencies have been required to enable the disclosure of security vulnerabilities.

Canada lagging behind

By comparison, our recent report found that the government of Canada is lagging behind countries like the U.S. by failing to welcome vulnerability reports from external experts.

We haven’t had an attack the size of the Office of Personnel Management breach in the U.S., but we aren’t immune either.

Consider the 2017 Equifax breach, in which 19,000 Canadians were affected after attackers exploited a security vulnerability in an online customer portal.

In August 2020, the Canada Revenue Agency locked more than 5,000 user accounts due to cyberattacks partially enabled by the agency’s lack of two-factor authentication.

Our report, published through the Cybersecure Policy Exchange at Ryerson University, is the first publicly available research that examines how Canada treats the reporting of security flaws in comparison to other countries.

We discovered that while 60 per cent of G20 members have distinct and clear processes for reporting security vulnerabilities in public infrastructure, Canada does not.

A table listing procedures for disclosing security vulnerabilities, with flags marking the countries that follow each procedure; Canada’s column shows a red box with a white X for every row.
When assessing whether the Government of Canada meets standards for vulnerability disclosure in comparison to G20 members, we discovered that Canada is falling behind its peers. Photo Credit: Cybersecure Policy Exchange/ Ryerson University

Cybersecurity experts can disclose “cyber incidents” to the Canadian Centre for Cyber Security. But this term is defined so narrowly that it excludes vulnerabilities that have not yet been weaponized.

And while the United Kingdom and the U.S. governments have promised to make efforts to fix security flaws that are reported, the Canadian Centre for Cyber Security has made no such promise.

These gaps in supporting and protecting the security researchers who identify vulnerabilities ultimately put Canada and Canadians at greater risk.

Vulnerable systems, vulnerable people

Cybersecurity experts can face significant legal risks when they report security flaws to the Canadian government. Computer hacking is prohibited by the Criminal Code, and in certain circumstances by laws like the Copyright Act.

A table listing security research activities, the laws under which a researcher could be charged for each, and summaries of what those laws mean.
Some of the legal risks in Canada for discovering and disclosing security vulnerabilities found in software and hardware. Photo Credit: Cybersecure Policy Exchange/ Ryerson University

But unlike in the Netherlands and the U.S., there is no legal framework here for reporting security vulnerabilities in good faith.

Canada’s current approach has a chilling effect on the disclosure of security weaknesses, not only in government systems but in all software and hardware.

This approach largely leaves cybersecurity researchers in the dark about whether — and how — they should notify the government when they spot security flaws that could be exploited.

A cybersecure Canada requires working with experts who identify the security risks faced by our institutions and infrastructure.

It’s not too late for the federal government to institute a process allowing experts to report security flaws, and to draw on best practices while doing so.

Our work outlines the importance of defining who can submit vulnerability reports, and describes what the reporting and fixing process can look like. It is important to credit or recognize the experts who disclose vulnerabilities. The public should be given information about vulnerabilities and the solutions required to fix them.

An infographic titled “Phases of Vulnerability Disclosure” showing a woman at a desk surrounded by text boxes describing each phase.
The phases of vulnerability disclosure: discovery, reporting, validation and triage, developing a solution, applying that solution, and informing the public. Photo Credit: Cybersecure Policy Exchange/ Ryerson University

Imperative improvements

Cybersecurity experts are “a significant but underappreciated resource” when it comes to reducing security risks of government systems. They want to help.

The Canadian government needs to implement clearer processes and policies to foster co-operation with cybersecurity experts working in the public interest.

As cyberattacks grow in frequency, scale and sophistication, better cybersecurity practices in Canada are not just desirable — they are imperative.

Private messages contribute to the spread of COVID-19 conspiracies

Written by researchers at Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

Private messaging apps allow information to spread in an unchecked manner.

The COVID-19 global pandemic has been accompanied by misinformation about the virus, its origins and how it spreads.

One in seven Canadians thinks there is some truth to the claim that Bill Gates is using the coronavirus to push a vaccine with a microchip capable of tracking people. Those who believe this and other COVID-19 conspiracy theories are much more likely to get their news from social media platforms like Facebook or Twitter.

In extreme cases, conspiracy thinking spurred by online disinformation can result in hate-fuelled violence, as we saw in the insurrection at the U.S. Capitol, the Québec City mosque shooting, the Toronto van attack and the incident in 2020 where an armed man crashed his truck through the gates of Rideau Hall.

Moderate content

These and other events have placed pressure on social media platforms to label, remove and slow the spread of harmful, publicly viewable content. As part of these responses to the spread of misinformation, Donald Trump was deplatformed during the final weeks of his presidency.

These discussions on content moderation have mainly centred on platforms where content is generally open and accessible to view, comment on and share. But what’s happening in those online spaces that aren’t open for all to see? It’s much harder to say. And perhaps not surprisingly, conspiracy theories are spreading through private messaging apps like WhatsApp, Telegram, Messenger and WeChat, where they cause harm.

By leveraging large groups of users and long chains of forwarded messages, false information can still go viral on private platforms.

White nationalists and other extremist groups are trying to use messaging apps to organize, and malicious hackers are using private messages to conduct cybercrime. False stories spreading on messaging apps have also led to real-world violence, as happened in India and the United Kingdom.

Trust and private communication

We conducted a survey of 2,500 Canadian residents in March 2021 and found that they’re increasingly using private messaging platforms to get their news.

Overall, 21 per cent said that they rely on private messages for news — up from 11 per cent in 2019. We also found that people who regularly receive their news through messaging apps are more likely to believe COVID-19 conspiracy theories, including the false claim that vaccines include microchips.

There is a level of intimacy in private messaging apps that’s different from news viewed on social media feeds or other platforms, with content shared directly by people we often know and trust. A majority of Canadians reported that they had a similar level of trust in the news they receive on private messaging apps as they do in the news from TV or news websites.

Our research also uncovered a uniquely Canadian phenomenon. As a multicultural society with many newcomers, the Canadian private messaging landscape is remarkably diverse. For example, people who have arrived in Canada in the last 10 years were more than twice as likely to use WhatsApp. Similarly, newcomers from China were five times more likely to use WeChat.

We also found that half of Canadians receive messages that they suspect are false at least a few times per month, and that one in four receive messages with hate speech at least monthly. These rates were higher among people of colour. Because different apps provide different ways of spreading and mitigating harmful content, each requires a tailored strategy.

A graph showing the self-reported frequency of receiving harmful private messages in a representative survey of Canadian residents. Photo credit: Cybersecure Policy Exchange, Ryerson University.

Mitigating harm

Platforms and governments around the world are grappling with the tension between mitigating online harms and protecting the democratic values of free expression and privacy, particularly among more private modes of communication. This tension is only exacerbated by some platforms’ use of privacy-preserving end-to-end encryption that ensures only the sender and receiver can read the messages.

Some messaging apps have been experimenting with how to reduce the spread of harmful materials, including the introduction of limits on group sizes and on the number of times a message can be forwarded. WhatsApp is now testing a feature that nudges users to verify the source of highly forwarded messages by linking to a Google search of the message content. Some experts are also advancing the idea of adding warning labels to false news shared in messages — a concept that a majority (54 per cent) of Canadians supported when we described the idea.
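As a purely illustrative sketch of how a forwarding limit and a “highly forwarded” label could work in principle (this is not WhatsApp’s or any platform’s actual implementation, and the thresholds are invented):

```python
from dataclasses import dataclass

# Illustrative thresholds only; real apps tune these very differently.
FORWARD_LIMIT = 5            # max chats a message can be forwarded to at once
HIGHLY_FORWARDED_AFTER = 5   # forwards before a message is labelled and throttled


@dataclass
class Message:
    text: str
    forward_count: int = 0


def forward(message: Message, target_chats: list) -> dict:
    """Apply a hypothetical forwarding policy to a private message."""
    if message.forward_count >= HIGHLY_FORWARDED_AFTER:
        label = "Forwarded many times"  # nudges recipients to be skeptical
        allowed = target_chats[:1]      # throttle viral chains to one chat at a time
    else:
        label = None
        allowed = target_chats[:FORWARD_LIMIT]
    message.forward_count += 1
    return {"delivered_to": allowed, "label": label}


# Example: a message already forwarded many times gets labelled and throttled.
viral = Message("Breaking: miracle cure!", forward_count=12)
print(forward(viral, ["family", "coworkers", "neighbours"]))
```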

Some examples of private messaging app features that could reduce harms, such as group size or message forwarding limits. Photo credit: Cybersecure Policy Exchange, Ryerson University.

However, there is certainly more that governments can do in this quickly moving area. More transparency is required from messaging platforms about how they’re responding to user reports of harmful material and what approaches they’re using to stall the spread of these messages. Governments can also support digital literacy efforts and invest in research about harms through private messaging in Canada.

As Canadians shift to more private modes of communication, policy needs to keep up to maintain a vibrant and cohesive democracy in Canada while protecting free expression and privacy.

Canada should be transparent in how it uses AI to screen immigrants

Written by a researcher at Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

The Canadian government’s employment of AI technology needs to be transparent.

Like other governments around the world, the Canadian federal government has turned to technology to improve the quality and efficiency of its public services and programs. Many of these improvements are powered by artificial intelligence (AI), which can raise concerns when introduced to deliver services to vulnerable communities.

To ensure responsible use of AI, the Canadian government developed the “algorithmic impact assessment” tool, which determines the impact of automated decision systems.

Pilot project

The algorithmic impact assessment was introduced in April 2020, but very little is known about how it was developed. One of the projects that informed its development has garnered concern from the media: Immigration, Refugees and Citizenship Canada’s (IRCC) AI pilot project.

The AI pilot project introduced by IRCC in 2018 is an analytics-based system that sorts through a portion of temporary resident visa applications from China and India. IRCC has previously explained that because this pilot was one of the most concrete examples of AI in government at the time, the department directly engaged with and provided feedback to the Treasury Board Secretariat of Canada during the development of the algorithmic impact assessment.

Not much is publicly known about IRCC’s AI pilot project. The Canadian government has been selective about sharing information on how exactly it is using AI to deliver programs and services.

A 2018 report by the Citizen Lab investigated how the Canadian government may be using AI to augment and replace human decision-making in Canada’s immigration and refugee system. During the report’s development, 27 separate access to information requests were submitted to the Government of Canada. By the time the report was published, all remained unanswered.

Minister of Immigration, Refugees and Citizenship Ahmed Hussen responds to questions about Canada’s use of artificial intelligence to help screen and process immigrant visa applications during question period in the House of Commons on Sept. 18, 2018. Photo credit: THE CANADIAN PRESS/Adrian Wyld.

The case of New Zealand

While the algorithmic impact assessment is a step in the right direction, the government needs to release information about what it claims is one of the most concrete examples of AI in government. Remaining selectively silent may lead the Canadian government to fall victim to the allure of AI, as happened in New Zealand.

In New Zealand, a country known for its positive immigration policy, reports emerged that Immigration New Zealand had deployed a system to track and deport “undesirable” migrants. The data of 11,000 irregular immigrants — who attempt to enter the country outside of regular immigration channels — was allegedly being used to forecast how much each irregular migrant would cost New Zealand. This information included age, gender, country of origin, visa held upon entering New Zealand, involvement with law enforcement and health service usage. Coupled with other data, this information was reportedly used to identify and deport “likely troublemakers.”

Concerns surrounding Immigration New Zealand’s harm model ultimately drove the New Zealand government to take stock of how algorithms were being used to crunch people’s data. This assessment set the foundation for systematic transparency on the development and use of algorithms, including those introduced to manage migration.

In Canada, by contrast, advanced analytics are used to sort applications into groups of varying complexity. More specifically, temporary resident visa applications are reviewed on two fronts: eligibility and admissibility.

The Canadian pilot is an automated system trained on rules established by experienced officers to identify characteristics in applications that indicate a higher likelihood of ineligibility. For straightforward applications, the system approves eligibility solely based on the model’s determination, while eligibility for more complex applications is decided upon by an immigration officer. All applications are reviewed by an immigration officer for admissibility.
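To illustrate how such a two-stream triage could work in principle, here is a minimal, hypothetical Python sketch. The rule names, application fields and outcomes are invented for illustration; they are not IRCC's actual criteria, model or code.

# A hypothetical sketch of rules-based triage for visa applications.
# The rules and fields below are illustrative assumptions only.

COMPLEXITY_RULES = [
    # Each rule flags a characteristic that, in this sketch, marks an
    # application as "complex" and routes its eligibility to an officer.
    lambda app: app["previous_refusal"],
    lambda app: app["documents_incomplete"],
    lambda app: app["purpose_of_visit"] == "other",
]

def triage(application: dict) -> dict:
    """Sort an application into a routine or a complex stream."""
    is_complex = any(rule(application) for rule in COMPLEXITY_RULES)
    return {
        # Routine files: eligibility approved automatically in this sketch.
        # Complex files: eligibility decided by an immigration officer.
        "eligibility": "officer review" if is_complex else "automated approval",
        # In both streams, admissibility is always reviewed by an officer.
        "admissibility": "officer review",
    }

application = {
    "previous_refusal": False,
    "documents_incomplete": False,
    "purpose_of_visit": "tourism",
}
print(triage(application))  # {'eligibility': 'automated approval', 'admissibility': 'officer review'}

Even in this toy version the policy question is visible: the rules determine who gets automated approval, which is why publishing how they are developed and monitored is central to transparency.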

A report by public broadcaster RNZ on Immigration New Zealand’s data profiling.

Levels of review

For New Zealand, publishing information on how, why and where the government was using AI gave the public the opportunity to provide feedback and make recommendations. These efforts led to the New Zealand government developing an Algorithm Charter on the use of algorithms by government agencies. More importantly, the public can now understand how the government is experimenting with new capabilities and offer their input.

Although IRCC has been careful in deploying AI to manage migration, there is great benefit in being transparent about its endeavours involving AI. By engaging in open innovation and making information about IRCC’s AI pilot project public, the government can start having meaningful conversations, sparking thoughtful innovation and encouraging public trust in its application of emerging technologies.

How game worlds are preparing humanitarian workers for high-stakes scenarios

Written by , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

The World Health Organization is building a game world to allow medical practitioners to admit virtual patients for emergency treatment during a mass casualty simulation.

The pandemic has bred a new dependence on online technologies for work and social engagement. Immersive technologies such as 3D video games, virtual reality and augmented reality can now be designed so that the people experiencing them are transported into a socially rich online world.

This began with the design of massively multiplayer online role-playing games and continues with other platforms for living in an altered digital reality with purposeful activity, such as Second Life.

Introduction to Second Life.

During pandemic shutdowns, online role-playing gamers still had access to extensive social connections in virtual worlds. Players communicated free of charge with hundreds of other people on the real-time voice chat platform Discord.

The combination of an immersive 3D video game and real-time voice communications created a reassuring space when the external world was cut off.

But game worlds are not just for recreational communities. This form of desktop-based immersion has now reached the medical and humanitarian fields as well.

Building game world simulations

A game world is an alternate, virtual environment that users can be transported into. This means that in corporate, academic or life-coaching settings, people can also learn and practise extending their skills in virtual space.

Some life coaches use game worlds to help people imagine alternative settings or outcomes. Video from consultant Katharina Kaifler.

Perhaps most importantly, a game world is not a game. It has no winning or losing conditions. It is simply an immersive fantasy world that is created with the intention of promoting interaction with its environment. Visiting a game world is like visiting a city or a continent, or even the inside of an emergency room, where rules, called game mechanics, govern the players’ abilities. It differs from a game in that it is not designed purely to entertain, but to both entertain and change behaviour.

A game world has a few essential components. The first is a narrative, or story. If it were used for medical education, for example, the world could be the inside of an emergency room.

Game worlds for medical and aid workers

Currently, the World Health Organization (WHO) is building such simulations. By using a 360-degree camera, we can record any emergency department in the world, then translate that into a 3D model that can be viewed on a desktop or enlarged so that the user is standing inside the virtual reproduction.

The WHO Learning Academy is building code to admit simulated patients, each one with its own life path. Virtual lives can be saved by managing the flow of patients during a mass casualty simulation. The software can predict how many minutes can be saved by careful triage.
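The flow-management idea can be illustrated with a small, hypothetical Python simulation. The severity levels, time windows and treatment times below are invented for illustration and are not the WHO Learning Academy's actual model; the sketch only shows why treating the most critical simulated patients first saves virtual lives.

import random

# A hypothetical mass-casualty triage sketch. Severities, time windows and
# treatment times are illustrative assumptions, not the WHO's actual model.

random.seed(1)

def make_patients(n=20):
    """Each simulated patient has a severity (1 = critical, 3 = minor) and a
    window (in minutes) within which treatment must start to 'save' them."""
    patients = []
    for i in range(n):
        severity = random.choice([1, 2, 3])
        window = {1: 30, 2: 90, 3: 240}[severity]
        patients.append({"id": i, "severity": severity, "window": window})
    return patients

def run(order, treatment_time=15, beds=3):
    """Treat patients in the given order with a fixed number of beds.
    Returns how many patients start treatment inside their window."""
    bed_free_at = [0] * beds                 # minute at which each bed frees up
    saved = 0
    for patient in order:
        bed = min(range(beds), key=lambda b: bed_free_at[b])
        start = bed_free_at[bed]             # the patient waits until a bed opens
        bed_free_at[bed] = start + treatment_time
        if start <= patient["window"]:
            saved += 1
    return saved

patients = make_patients()
arrival_order = list(patients)                                   # first come, first served
triaged_order = sorted(patients, key=lambda p: p["severity"])    # most critical first

print("Treated in time without triage:", run(arrival_order))
print("Treated in time with triage:   ", run(triaged_order))

A full simulation would layer a 3D environment, patient "life paths" and staff roles on top of queueing logic like this; the point here is simply that the order of treatment, not just its speed, determines how many virtual lives are saved.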

Game worlds can be more fantastical and oriented to increase enjoyment of learning, particularly when the subject matter is complex. The UN Office for Disaster Risk Reduction and the UN World Food Program have produced video games for learning about the respective issues they respond to.

Currently with the World Food Program, our team — consisting of a video game company, learning professionals and UN subject experts in Rome — is building a fully immersive exploration game. We are working on building a game world for UN staff that will help them learn how to protect vulnerable populations and how to be accountable in their field work.

Learning through game worlds will help some World Food Program staff practise decision making. Here, a worker loads a truck in Les Cayes, Haiti, in November 2016. Photo credit: THE CANADIAN PRESS/Paul Chiasson.

The game world features multiple engagement loops (things to do) that make it attractive to participate in.

When you have to teach the minutiae of a 40-page manual in a few hours, a video game world is a sound approach. People’s capacity to recall text they have read is limited in the short term, and their memory of it diminishes over the long term, but when people learn procedures through a video game world they show high engagement and retain the information.

Fantasy encompasses a simulation

Two terms have now become essential when describing what happens in game worlds: “autopoiesis,” which means self-organizing or self-generating; and “hyper reality,” a term developed by French post-modern sociologist Jean Baudrillard referring to “the generation by models” of something real “without origin or reality.”

A game world has its own “digital physics,” not real-world physics, thus separating it from simulation. A game world is a place where new things can be created and the person lives among fantasy objects. Autopoietic hyper reality is a digital space where the player is enticed to complete goals in a fantasy that encompasses a simulation.

In ‘World of Warcraft,’ players can create a character avatar and explore an open game world. Photo credit: Shutterstock.

Scholars across the field of digital media are now hard at work creating a kind of fusion of the human nervous system with technology. What this means is that the boundary between one and the other will become imaginary, for example, in the instance of doctors using remote technologies to conduct medical procedures.

But the larger meaning is that as virtual reality continues to mature, we will gradually live more of our lives in digital space. We’ve seen many examples of this through the pandemic, including new uses of Zoom and social media to replace the workplace and face-to-face contact.

Digital game worlds are places we can live, play and work together across great distances while feeling we are in a reassuring place where we connect.

What are NFTs and why are people paying millions for them?

Written by , Ryerson University. Photo credit: Shutterstock. Originally published in The Conversation.

An NFT is a digital file with verified identity and ownership.

Last week, Christie’s sold a digital collage of images called “Everydays: The First 5000 Days” for US$69.3 million. This week, Elon Musk said he’s selling one of his tweets, which contains a song about NFTs, as an NFT.

The bidding on Musk’s tweet has already topped $1 million and millions more are pouring into the market — he has since tweeted, “Actually, doesn’t feel quite right selling this. Will pass.” And sites like NBA Top Shot (where you can buy, sell and trade digital NBA cards) have individual cards selling for over US$200,000.

It might sound ridiculous but the explosive market of crypto-collectibles and crypto-art is no joke. I investigate cryptocurrencies and have academic publications on Bitcoin markets. To help you understand what an NFT is and why they’re becoming so popular, here’s an explainer to make sense of it all.

What is an NFT?

A non-fungible token (NFT) is a digital file with verified identity and ownership. This verification is done using blockchain technology. Blockchain technology, simply put, is a tamper-resistant record-keeping system based on the mathematics of cryptography. So, that’s why you hear a lot of “crypto” when referring to NFTs — crypto-art, crypto-collectibles, etc.

What is fungibility?

Fungibility is the ability of an asset to be interchanged with other individual assets of the same kind; it implies equal value between the assets. If you own a fungible asset you can readily interchange it for another of a similar kind. Fungible assets simplify the exchange and trade processes, and the best example would be (you guessed it) money.

Is NFT the same as Bitcoin?

This is where I can explain and emphasize the “non-fungibility” property of NFTs. The main difference between NFTs and Bitcoin is that bitcoins are limited in supply and fungible (you can trade one bitcoin for another and both have the same value and price). NFTs are unique but unlimited, and non-fungible (no two artworks are the same). While NFTs can appreciate in value (just like real estate), they cannot simply be swapped one-for-one with another NFT.
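The contrast can be made concrete with a minimal, hypothetical Python sketch. The class names and fields are illustrative assumptions, not any real wallet or platform API: fungible assets are tracked only as amounts, while a non-fungible token carries a unique identity that cannot be merged or exchanged one-for-one.

# A hypothetical sketch contrasting fungible and non-fungible assets.
# The classes and fields are illustrative, not a real platform's API.

class FungibleBalance:
    """Fungible assets (dollars, bitcoins) are tracked as amounts:
    any one unit is interchangeable with any other unit."""
    def __init__(self, amount):
        self.amount = amount

    def pay(self, other, amount):
        assert self.amount >= amount, "insufficient funds"
        self.amount -= amount           # which specific units move is irrelevant
        other.amount += amount

class NonFungibleToken:
    """A non-fungible token is a unique record: it has an identity,
    not an amount, so two tokens are never interchangeable."""
    def __init__(self, token_id, artwork_uri, owner):
        self.token_id = token_id        # unique identifier
        self.artwork_uri = artwork_uri  # points at the digital file
        self.owner = owner              # only this field changes hands

alice, bob = FungibleBalance(2.0), FungibleBalance(0.0)
alice.pay(bob, 0.5)                     # any 0.5 units will do

artwork = NonFungibleToken("digital-collage-001", "https://example.org/collage.png", owner="artist")
# The token cannot be split or swapped one-for-one with another token;
# a sale simply assigns a new value to artwork.owner.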

Blockchain technology, simply put, is a tamper-resistant system based on the mathematics of cryptography. Photo credit: Shutterstock.

What does this mean for the future of money?

While not directly related to NFTs, it’s important to mention some properties of money. Among many properties, money has to be fungible (one unit is viewed as interchangeable with another) and divisible (it can be divided into smaller units of value). Bitcoin has both properties, but NFTs have neither: they are not fungible, and a standard NFT cannot be divided.

For example, a single dollar is easily convertible into four quarters or ten dimes, and one bitcoin can be divided into 100 million smaller units called satoshis, but an NFT cannot be split into smaller pieces. In fact, fungibility and divisibility are part of five requirements for a currency to exist in a regulated economy.

Why are NFTs being valued?

The importance of NFTs lies in providing the ability to securely value, purchase and exchange digital art using a digital ledger. NFTs started in online gaming, gained wider attention when Nike patented a blockchain-based method of verifying the authenticity of its shoes (CryptoKicks), and then entered the mainstream when the famous Christie’s auction embraced NFT valuation of a digital art piece.

NFTs are commonly created by uploading files, such as digital artwork, to an auction market. Just like any other form of art, NFTs are not mutually interchangeable, making them more like “collectible” items.

The platform (typically Ethereum) allows the digital art to be “tokenized” and the ownership to be safely stored using a decentralized, open-source blockchain (that is, anyone can check the ledger) featuring smart contract functionality. This means the traditional role of a “middle man” in selling the art is now digitized.

Is owning the NFTs the same as owning the copyright?

No, owning the NFT doesn’t grant you the copyright to the art; they are distinct from one another. The ownership of the NFT is established using a digital ledger, which anyone can access because it is stored openly. This ledger tracks who owns an NFT and ensures that the record can’t be duplicated or tampered with; the rules governing transfers are essentially a “smart contract.”
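As a rough illustration of that record-keeping role (and not of Ethereum's actual contract interface), a minimal ownership registry might look like the hypothetical Python sketch below. Note that it records who owns a token, not who holds copyright in the underlying artwork.

# A hypothetical sketch of an NFT ownership registry. It mimics the
# bookkeeping role of a smart contract; it is not Ethereum's real API.

class NFTRegistry:
    def __init__(self):
        self.owners = {}      # token_id -> current owner
        self.history = []     # append-only transfer log (the "ledger")

    def mint(self, token_id, creator):
        assert token_id not in self.owners, "token already exists"   # no duplicates
        self.owners[token_id] = creator
        self.history.append(("mint", token_id, creator))

    def transfer(self, token_id, seller, buyer):
        assert self.owners.get(token_id) == seller, "only the current owner can sell"
        self.owners[token_id] = buyer
        self.history.append(("transfer", token_id, seller, buyer))

registry = NFTRegistry()
registry.mint("digital-collage-001", creator="artist")
registry.transfer("digital-collage-001", seller="artist", buyer="collector")
print(registry.owners["digital-collage-001"])   # the registry proves ownership,
                                                # but copyright stays with the artist

On a real blockchain, this bookkeeping is replicated across many computers and enforced by consensus rules, which is what makes the record so hard to tamper with.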

What does the future hold for NFTs?

It is undeniable that digital assets and blockchain technology are changing the future of trade, and NFTs are at the forefront of this growth. However, just like other examples in history (the Dutch tulip mania, the dotcom bubble and so on), certain valuations may face future corrections, depending on socio-economic demand and the chance of a bubble.

Every generation has its own niche attachment to certain valuations, whether for vanity or other reasons. NFTs are currently very popular among younger generations, but whether this generation will have the economic power to purchase them, or find a use for them in the future, is both a social and an economic question.

For NFTs, the true potential is yet to be uncovered. Whether big industry players in art, design or fashion will buy in is also yet to be seen. One thing is for sure: NFTs have opened the door for many digital artists to be identified and valued, and the smart contract functionality of blockchain technology will be used in future valuations of many assets.