Gen Z goes retro: Why the younger generation is ditching smartphones for ‘dumb phones’

Written by Omar H. Fares, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

Sales of so-called “dumb phones,” like flip and slide phones, are on the rise among the younger generation.

There is a growing movement among Gen Z to do away with smartphones and revert to “less smart” phones like old-school flip and slide phones. Flip phones were popular in the mid-1990s and 2000s, but now seem to be making a comeback among younger people.

While this may seem like a counter-intuitive trend in our technology-reliant society, a Reddit forum dedicated to “dumb phones” is steadily gaining in popularity. According to a CNBC news report, flip phone sales are on the rise in the U.S.

Gen Z’s interest in flip phones is the latest in a series of young people’s obsessions with the aesthetic of the 1990s and 2000s. Y2K fashion has been steadily making a comeback over the past few years and the use of vintage technology, like disposable cameras, is on the rise.

There are a few reasons why, including nostalgia and a yearning for an idealized version of the past, the desire for a “digital detox” and growing privacy concerns.

The power of nostalgia

Nostalgia is a complex emotion that involves reconnecting with the happy emotions of an idealized past by recalling positive memories.

Over the years, marketers have realized that nostalgia is a powerful way to evoke positive emotions — so much so that nostalgia marketing has become a recognized marketing strategy. It leverages positive memories and feelings associated with the past to create an emotional connection with consumers.

A wealth of research shows that nostalgia can make consumers willing to pay more, strengthen brand ties, and increase purchase intention and digital brand engagement.

Nostalgia may be a driving factor behind people purchasing flip phones because they evoke memories of a previous era in mobile communication.

But nostalgia marketing doesn’t just target the younger generation — it’s also a powerful tool for advertising to those who grew up using older mobile devices. Nokia is an example of a company that understands this well.

A YouTube advertisement for Nokia’s 2720 V Flip shows how brands can use nostalgia marketing to appeal to customers and drive product sales.

A marketing video about the Nokia 2720 V Flip, a modern take on the flip phones from the 2000s.

When older generations speak about objects from the past, they usually hearken back to “the golden era” or “golden time.” The comment section of the Nokia video showcases this kind of thinking.

One comment reads: “My first phone was a Nokia 2760! It was also a flip phone. This brings back good memories.” Another says: “I am definitely getting this just for good old memories. When life was easy.”

Digital detox

Another reason why people might be purchasing flip phones is to do a digital detox and cut down on screen time. A digital detox refers to a period of time when a person refrains from using their electronic devices, like smartphones, to focus on social connections in the physical world and reduce stress.

In 2022, people in the U.S. spent more than 4.5 hours daily on their mobile devices. In Canada, adults self-reported spending about 3.2 hours per day in front of screens in 2022. Children and youth had about three hours of screen time per day in 2016 and 2017.

Excessive smartphone usage can result in a number of harmful side effects, such as sleep disruption. Just over 50 per cent of Canadians check their smartphones before they go to sleep.

The blue-light emitted from smartphones may suppress melatonin production, making it harder to sleep and causing physiological issues including reduced glucose tolerance, increased blood pressure and increased inflammatory markers.

A man looking at a smartphone while lying in bed
Just over 50 per cent of Canadians check their smartphones before they go to sleep. Photo credit: Shutterstock

The increased level of digital connectivity and the pressure to respond instantly, especially in a post-pandemic world where many people work remotely, can lead to increased levels of anxiety and stress. Being constantly online can also lead to reduced social connectivity and can negatively impact personal relationships and social skills.

The constant digital noise and multi-tasking nature of smartphones and apps like TikTok can lead to decreased attention spans. From my personal observations in the classroom, I’ve noticed students find it difficult to concentrate for prolonged periods of time.

A condition known as text neck can also occur when a person spends extended periods of time looking down at an electronic device. The repetitive strain of holding the head forward and down can cause discomfort and pain in the neck.

As people become more aware of the potential side effects of excessive screen time and constant digital connectivity, some are choosing to digitally detox. Flip phones are a way people can limit their exposure to digital noise and build a healthier relationship with technology.

Privacy concerns

Smartphones have a long list of advanced features such as cameras, GPS and tons of mobile applications — all of which can store and access a significant list of personal data.

In some cases, personal data can be used for targeted advertisements, but in worst cases that information can be leaked as part of a data breach. More and more people are growing concerned with how their data is being collected, shared and used by companies and online platforms.

A hand holding a flip cellphone over a table covered with an assortment of smartphones.
The Motorola Razr was a type of flip phone that was extremely popular in the mid-2000s. Photo credit: Shutterstock

It’s natural to feel worried about the potential misuse of our personal information. That’s why some people are taking matters into their own hands and seeking out creative ways to limit the amount of data being collected about them.

Old-fashioned flip phones generally have fewer features that collect and store personal data compared to smartphones. That can make them a more attractive option for people concerned with privacy, data breaches or surveillance.

But this trend doesn’t mean smartphones are going out of style. Millions of smartphones are still shipped worldwide every year. The trend may instead result in users owning both a smartphone and a flip phone, allowing them to digitally detox and reduce screen time without sacrificing the benefits of social media.

ChatGPT’s greatest achievement might just be its ability to trick us into thinking that it’s honest

Written by Richard Lachman, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

AI chatbots are designed to convincingly sustain a conversation.

In American writer Mark Twain’s autobiography, he quotes — or perhaps misquotes — former British Prime Minister Benjamin Disraeli as saying: “There are three kinds of lies: lies, damned lies, and statistics.”

In a marvellous leap forward, artificial intelligence combines all three in a tidy little package.

ChatGPT, and other generative AI chatbots like it, are trained on vast datasets from across the internet to produce the statistically most likely response to a prompt. Its answers are not based on any understanding of what makes something funny, meaningful or accurate, but rather, the phrasing, spelling, grammar and even style of other webpages.

It presents its responses through what’s called a “conversational interface”: it remembers what a user has said, and can have a conversation using context cues and clever gambits. It’s statistical pastiche plus statistical panache, and that’s where the trouble lies.
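The “statistically most likely response” idea can be made concrete with a deliberately tiny toy in Python: a bigram model that always emits the most frequent next word seen in its training text. This is an illustration of the underlying principle only; ChatGPT’s actual model is vastly larger and more sophisticated.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in the training
# text, then always emit the statistically most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    followers[prev][cur] += 1

def most_likely_next(word):
    # The most frequent follower observed in training. No understanding
    # of meaning is involved, only counting.
    return followers[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often in this corpus
```

Scaled up to billions of parameters and trained on much of the web, the same principle produces fluent prose with no comprehension behind it.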

Unthinking, but convincing

When I talk to another human, it cues a lifetime of my experience in dealing with other people. So when a program speaks like a person, it is very hard to not react as if one is engaging in an actual conversation — taking something in, thinking about it, responding in the context of both of our ideas.

Yet, that’s not at all what is happening with an AI interlocutor. They cannot think and they do not have understanding or comprehension of any sort.

Presenting information to us as a human does, in conversation, makes AI more convincing than it should be. Software is pretending to be more reliable than it is, because it’s using human tricks of rhetoric to fake trustworthiness, competence and understanding far beyond its capabilities.

There are two issues here: is the output correct; and do people think that the output is correct?

The interface side of the software is promising more than the algorithm-side can deliver on, and the developers know it. Sam Altman, the chief executive officer of OpenAI, the company behind ChatGPT, admits that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.”

That still hasn’t stopped a stampede of companies rushing to integrate the early-stage tool into their user-facing products (including Microsoft’s Bing search), in an effort not to be left out.

Fact and fiction

Sometimes the AI is going to be wrong, but the conversational interface produces outputs with the same confidence and polish as when it is correct. For example, as science-fiction writer Ted Chiang points out, the tool makes errors when doing addition with larger numbers, because it doesn’t actually have any logic for doing math.

It simply pattern-matches examples seen on the web that involve addition. And while it might find examples for more common math questions, it just hasn’t seen training text involving larger numbers.

It doesn’t “know” the math rules a 10-year-old would be able to explicitly use. Yet the conversational interface presents its response as certain, no matter how wrong it is, as reflected in this exchange with ChatGPT.

User: What’s the capital of Malaysia?

ChatGPT: The capital of Malaysia is Kuala Lumpur.

User: What is 27 * 7338?

ChatGPT: 27 * 7338 is 200,526.

It’s not.
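The error is easy to verify; any calculator, or a couple of lines of Python, settles it:

```python
# Checking the exchange above: ChatGPT's confident answer vs. real arithmetic.
claimed = 200526       # the chatbot's answer
actual = 27 * 7338     # the correct product

print(actual)             # 198126
print(claimed == actual)  # False
```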

Generative AI can blend actual facts with made-up ones in a biography of a public figure, or cite plausible scientific references for papers that were never written.

That makes sense: statistically, webpages note that famous people have often won awards, and papers usually have references. ChatGPT is just doing what it was built to do, and assembling content that could be likely, regardless of whether it’s true.

Computer scientists refer to this as AI hallucination. The rest of us might call it lying.

Intimidating outputs

When I teach my design students, I talk about the importance of matching output to the process. If an idea is at the conceptual stage, it shouldn’t be presented in a manner that makes it look more polished than it actually is — they shouldn’t render it in 3D or print it on glossy cardstock. A pencil sketch makes clear that the idea is preliminary, easy to change and shouldn’t be expected to address every part of a problem.

The same thing is true of conversational interfaces: when tech “speaks” to us in well-crafted, grammatically correct or chatty tones, we tend to interpret it as having much more thoughtfulness and reasoning than is actually present. It’s a trick a con artist might use, not a computer.

A hand holding a phone screen showing a live chat with the text HI HOW CAN I HELP YOU?
Chatbots are increasingly being used by technology companies in user-facing products. Photo credit: Shutterstock.

AI developers have a responsibility to manage user expectations, because we may already be primed to believe whatever the machine says. Mathematician Jordan Ellenberg describes a type of “algebraic intimidation” that can overwhelm our better judgement just by claiming there’s math involved.

AI, with hundreds of billions of parameters, can disarm us with a similar algorithmic intimidation.

While we’re making the algorithms produce better and better content, we need to make sure the interface itself doesn’t over-promise. Conversations in the tech world are already filled with overconfidence and arrogance — maybe AI can have a little humility instead.

The next phase of the internet is coming: Here’s what you need to know about Web3

Written by Adrian Ma, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

The terms Web3 and Web 3.0 are often used interchangeably, but they are different concepts.

The rapid growth of cryptocurrencies and virtual non-fungible tokens has dominated news headlines in recent years. But few may see how these modish applications connect in a wider idea being touted by some as the next iteration of the internet — Web3.

There are many misconceptions surrounding this buzzy (and, frankly, fuzzy) term, including the conflation of Web3 with Web 3.0. Here’s what you need to know about these terms.

What is Web3?

Since Web3 is still a developing movement, there’s no universal agreement among experts about its definition. Simply put, Web3 is envisioned to be a “decentralized web ecosystem,” empowering users to bypass internet gatekeepers and retain ownership of their data.

This would be done through blockchain technology; rather than relying on single servers and centralized databases, Web3 would run on public ledgers where data is stored in blocks that are cryptographically chained together across networks of computers.
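The “chained ledger” idea can be sketched in a few lines of Python. This is an illustrative toy only; real blockchains add consensus protocols, digital signatures and replication across many machines.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's data together with the previous block's hash.
    body = {"data": block["data"], "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid(chain):
    # Each block must match its own hash and point at its predecessor's.
    return all(b["hash"] == block_hash(b) and b["prev_hash"] == a["hash"]
               for a, b in zip(chain, chain[1:]))

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice sends bob 5 tokens", chain[-1]["hash"]))
chain.append(make_block("bob posts a photo", chain[-1]["hash"]))
print(valid(chain))   # True

# Quietly altering an earlier record breaks the chain of hashes:
chain[1]["data"] = "alice sends bob 500 tokens"
print(valid(chain))   # False: the tampering is detectable
```

Because each block’s hash covers the previous block’s hash, rewriting history anywhere in the chain invalidates everything after it, which is what makes a decentralized public ledger tamper-evident.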

A decentralized Web3 would fundamentally change how the internet operates — financial institutions and tech companies would no longer need to be intermediaries of our online experiences.

As one business reporter put it:

“In a Web3 world, people control their own data and bounce around from social media to email to shopping using a single personalized account, creating a public record on the blockchain of all of that activity.”

Web3’s blockchain-based infrastructure would open up intriguing possibilities by ushering in the era of the “token economy.” The token economy would allow users to monetize their data by providing them with tokens for their online interactions. These tokens could offer users perks or benefits, including ownership stakes in content platforms or voting rights in online communities.

To better understand Web3, it helps to step back and see how the internet developed into what it is now.

Web 1.0: The ‘read-only’ web

Computer scientist Tim Berners-Lee is credited with inventing the world wide web in 1989, which allowed people to hyperlink static pages of information on websites accessible through internet browsers.

Berners-Lee was exploring more efficient ways for researchers at different institutions to share information. In 1991, he launched the world’s first website, which provided instructions on using the internet.

A middle-aged man in a suit sits in an arm chair speaking into a microphone.
Tim Berners-Lee, the inventor of the World Wide Web, speaks at the Open Government Partnership Global Summit in Ottawa in May 2019. Photo credit: THE CANADIAN PRESS/Justin Tang.

These basic “read-only” websites were managed by webmasters who were responsible for updating users and managing the information. In 1992, there were 10 websites. By 1994, after the web entered the public domain, there were 3,000.

When Google arrived in 1996 there were two million. Last year, there were approximately 1.2 billion websites, although it is estimated only 17 per cent are still active.

Web 2.0: The social web

The next major shift for the internet saw it develop from a “read-only web” to where we are currently — a “read-write web.” Websites became more dynamic and interactive. People became mass participants in generating content through hosted services like Wikipedia, Blogger, Flickr and Tumblr.

The idea of “Web 2.0” gained traction after technology publisher Tim O’Reilly popularized the term in 2004.

Later on, social media platforms like Facebook, YouTube, Twitter and Instagram and the growth of mobile apps led to unparalleled connectivity, albeit through distinct platforms. These platforms are known as walled gardens because their parent companies heavily regulate what users are able to do and there is no information exchange between competing services.

Tech companies like Amazon, Google and Apple are deeply embedded into every facet of our lives, from how we store and pay for our content to the personal data we offer (sometimes without our knowledge) to use their wares.

Web3 vs. Web 3.0

This brings us to the next phase of the internet, in which many wish to wrest back control from the entities that have come to hegemonize it.

The terms Web3 and Web 3.0 are often used interchangeably, but they are different concepts.

Web3 is the move towards a decentralized internet built on blockchain. Web 3.0, on the other hand, traces back to Berners-Lee’s original vision for the internet as a collection of websites linking everything together at the data level.

Our current internet can be thought of as a gigantic document depot. Computers are capable of retrieving information for us when we ask them to, but they aren’t capable of understanding the deeper meaning behind our requests.

A hand holding a cellphone displaying a group of social media platform icons.
In a Web 3.0 world, users would be able to link personal information across social media platforms. Photo credit: Shutterstock.

Information is also siloed into separate servers. Advances in programming, natural language processing, machine learning and artificial intelligence would allow computers to discern and process information in a more “human” way, leading to more efficient and effective content discovery, data sharing and analysis. This is known as the “semantic web” or the “read-write-execute” web.

In Berners-Lee’s Web 3.0 world, information would be stored in databases called Solid Pods, which would be owned by individual users. While this is a more centralized approach than Web3’s use of blockchain, it would allow data to be changed more quickly because it wouldn’t be distributed over multiple places.

It would allow, for example, a user’s social media profiles to be linked so that updating the personal information on one would automatically update the rest.

The next era of the internet

Web3 and Web 3.0 are often mixed up because the next era of the internet will likely feature elements of both movements — semantic web applications, linked data and a blockchain economy. It’s not hard to see why there is significant investment happening in this space.

But we’re just seeing the tip of the iceberg when it comes to the logistical issues and legal implications. Governments need to develop new regulations for everything from digital asset sales taxation to consumer protections to the complex privacy and piracy concerns of linked data.

There are also critics who argue that Web3, in particular, is merely a contradictory rebranding of cryptocurrency that will not democratize the internet. While it’s clear we’ve arrived at the doorstep of a new internet era, it’s really anyone’s guess as to what happens when we walk through that door.

ChatGPT could be a game-changer for marketers, but it won’t replace humans any time soon

Written by Omar H. Fares, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

A new AI chatbot could revolutionize marketing for businesses.

The recent release of the ChatGPT chatbot in November 2022 has generated significant public interest. In essence, ChatGPT is an AI-powered chatbot allowing users to simulate human-like conversations with an AI.

GPT stands for Generative Pre-trained Transformer, a language processing model developed by the American artificial intelligence company OpenAI. The GPT model uses deep learning, a branch of machine learning that trains artificial neural networks modelled loosely on the human brain, to produce human-like responses.

ChatGPT has a user-friendly interface that utilizes this technology, allowing users to interact with it in a conversational manner.

In light of this new technology, businesses and consumers alike have shown great interest in how such an innovation could revolutionize marketing strategies and customer experiences.

What’s so special about ChatGPT?

What sets ChatGPT apart from other chatbots is the size of its dataset. Chatbots are usually trained on a smaller dataset in a rule-based manner designed to answer specific questions and conduct certain tasks.

ChatGPT, on the other hand, is trained on a huge dataset — 175 billion parameters and 570 gigabytes — and is able to perform a range of tasks in different fields and industries. 570GB is equivalent to over 385 million pages on Microsoft Word.

Given the amount of data, ChatGPT can carry out a range of language-related activities, including answering questions in different fields and sectors, providing answers in different languages and generating content.

A picture of the OpenAI website showing a passage describing ChatGPT.
ChatGPT is a chatbot that was launched by OpenAI in November 2022. Photo credit: Shutterstock.

Friend or foe to marketers?

While ChatGPT may be a tremendous tool for marketers, it is important to understand its realistic possibilities and limitations in order to get the most value from it.

Traditionally, with the emergence of new technologies, consumers tend to go through Gartner’s hype cycle. In essence, Gartner’s cycle explains the process people go through when adopting a new technology.

The cycle starts with the innovation trigger and peak of inflated expectations stages, when consumers get enthusiastic about new technology and expectations start to build. Then consumers realize the pitfalls of the technology, creating a gap between expectations and reality. This is called the trough of disillusionment.

This is followed by the slope of enlightenment when consumers start to understand the technology and use it more appropriately and reasonably. Finally, the technology becomes widely adopted and used as intended during the plateau of productivity stage.

With the current public excitement surrounding ChatGPT, we appear to be nearing the peak of inflated expectations stage. It’s important for marketers to set realistic expectations for consumers and navigate the integration of ChatGPT to mitigate the effects of the trough of disillusionment stage.

Possibilities of ChatGPT

In its current form, ChatGPT cannot replace the human factor in marketing, but it could support content creation, enhance customer service, automate repetitive tasks and support data analysis.

Supporting content creation: Marketers may use ChatGPT to enhance existing content by using it to edit written work, make suggestions, summarize ideas and improve overall copy readability. Additionally, ChatGPT may enhance search engine optimization strategy by examining ideal keywords and tags.

Enhancing customer service: Businesses may train ChatGPT to respond to frequently asked questions and interact with customers in a human-like conversation. Rather than replacing the human factor, ChatGPT could provide 24/7 customer support. This could optimize business resources and enhance internal processes by leaving high-impact and sensitive tasks to humans. ChatGPT can also be trained in different languages, further enhancing customer experience and satisfaction.

A ChatGPT chatbot screen seen on a smartphone and laptop display, with the ChatGPT login screen in the background.
It’s important to understand the realistic possibilities and expectations of new and emerging technologies. Photo credit: Shutterstock.

Automating repetitive marketing tasks: According to a 2015 HubSpot report, marketers spent a significant amount of their time on repetitive tasks, such as sending emails and creating social media posts. While part of that challenge has been addressed with customer relationship management software, ChatGPT may enhance this by providing an added layer of personalization through the generation of creative content.

Additionally, ChatGPT may be helpful in other tasks, such as product descriptions. With access to a wealth of data, ChatGPT would be able to frequently update and adjust product descriptions, allowing marketers to focus on higher-impact tasks.

Limitations of ChatGPT

While the wide range of possibilities for enhancing marketing processes with ChatGPT are enticing, it is important for businesses to know about some key limitations and when to limit or avoid using ChatGPT in business operations.

Emotional intelligence: ChatGPT provides state-of-the-art, human-like responses and content. However, it is important to be aware that the tool is only human-like. As with the traditional challenges chatbots face, the degree of human-likeness will be essential for process enhancement and content creation.

Marketers could use ChatGPT to enhance customer experience, but without humans to provide relevancy, character, experience and personal connection, it will be challenging to fully capitalize on ChatGPT. Relying on ChatGPT to build customer connections and engagement without the involvement of humans may diminish meaningful customer connection instead of enhancing it.

Accuracy: While the marketing content may appear logical, it is important to note that ChatGPT is not error free and may provide incorrect and illogical answers. Marketers need to review and validate the content generated by ChatGPT to avoid possible errors and ensure consistency with brand message and image.

Creativity: Relying on ChatGPT for creative content may cause short- and long-term challenges. ChatGPT lacks the lived experience of individuals and an understanding of the complexity of human nature. Over-relying on it may limit creative abilities, so it should be used to support ideation and enhance existing content while still leaving room for human creativity.

Humans are irreplaceable

While ChatGPT has the potential to enhance marketing effectiveness, businesses should only use the technology as a tool to assist humans, not replace them. ChatGPT could provide creative content and support content ideation. However, the human factor is still essential for examining outputs and creating marketing messages that are consistent with a firm’s business strategy and vision.

A business that does not have a strong marketing strategy before integrating ChatGPT remains at a competitive disadvantage. However, with appropriate marketing strategies and plans, ChatGPT could effectively enhance and support existing marketing processes.

Canada needs to consider the user experience of migrants when designing programs that impact them

Written by Lucia Nalbandian, University of Toronto and Nick Dreher, Toronto Metropolitan University. Photo credit: THE CANADIAN PRESS/Nathan Denette. Originally published in The Conversation.

People walk through Pearson International Airport in Toronto in March 2020.

The first interaction many Canadians have with government services today is digital. Older Canadians turn to the internet to understand how to file for Old Age Security or track down a customer service phone number. Parents visit school district websites for information on school closures, schedules and curricula.

These digital offerings present an opportunity to enhance the quality of services and improve citizens’ experiences by taking a human-centred design approach.

Our research has revealed that governments across the globe are increasingly leveraging technology in immigration and integration processes. As Canadian government services focus on improving the experience of their citizens, efforts should be extended to future citizens as well.

Immigrants are a vital part of the Canadian economy and social fabric. In announcing Canada’s new immigration target of 500,000 permanent residents per year by 2025, Immigration Minister Sean Fraser said the numbers strike a balance between our economic needs and international obligations.

Bar graph showing the increasing number of immigrants Canada plans to welcome into the country over the coming years.
Canada’s new Immigration Levels Plan aims to welcome 465,000 new permanent residents in 2023, 485,000 in 2024 and 500,000 in 2025. Photo credit: Statistics Canada; author provided.

Despite the importance of immigrants for the Canadian economy and national identity, it remains to be seen if immigrants are engaged in the development of policies, services and technological tools that impact them.

Advancements in the immigration sector

Canada has steadily been introducing digital technologies into services, programs and processes that impact migrants. This has especially been the case with the COVID-19 pandemic requiring organizations to innovate their services and programs without diminishing the overall quality of service.

Immigration scholar Maggie Perzyna developed the COVID-19 Immigration Policy Tracker to examine how the Immigration and Refugee Board and Immigration, Refugees and Citizenship Canada (IRCC) used digital tools to enable employees to work from home. This helped reduce the administrative burden and increased efficiency.

A grid of people raise their hands during a virtual citizenship ceremony as seen on a mobile phone screen.
Participants raise their hands as they swear the oath to become Canadian citizens during a virtual citizenship ceremony held over livestream due to the COVID-19 pandemic, on July 1, 2020. Photo credit: THE CANADIAN PRESS/Justin Tang.

There continues to be a strong case for technological transformation in Canada’s immigration-focused departments, programs and services. As of July 31, 2022, Canada had a backlog of 2.4 million immigration applications.

In other words, Canada is failing to meet the application processing timelines it has set for itself for services — including passport renewal, refugee travel documents, and work and study permits.

While the Canadian government is trying to address these backlogs, there appears to be no discussion of asking immigrants about their journey through the application process. Rather, the government appears to centre employees. Meanwhile, issues with the Global Case Management System persist, creating problems for both employees and government service users.

Virtual processes were prioritized

While the Canadian government previously introduced a machine learning pilot tool to sort through temporary resident visa applications, its use stagnated due to pandemic-related border closures. Instead, virtual processes and digitization were prioritized, including:

  • Shifting from paper to digitized applications for the following: spousal and economic class immigration, applicants to the Non-Express Entry Provincial Nominee Program, the Rural and Northern Immigration Pilot, the Agri-Food Pilot, the Atlantic Immigration Pilot, the Québec Selected Investor Program, the Québec Entrepreneur Program, the Québec Self-Employed Persons Program and protected persons.
  • Hosting digital hearings for spousal immigration applicants, pre-removal risk assessments and refugees using video-conferencing.
  • Introducing secure document exchanges and the ability to view case information across various immigration streams.
  • Offering virtual citizenship tests and citizenship ceremonies.

Additionally, Fraser announced further measures to improve the user experience, modernize the immigration system and address challenges faced by people using IRCC. This strategy will also involve using data analytics to aid officers in sorting and processing visitor visa applications.

A man in a suit and tie speaking into a microphone. Behind him stands a line of Canadian flags.
Immigration, Refugees and Citizenship Minister Sean Fraser speaks during a news conference with UN High Commissioner for Refugees Filippo Grandi in April 2022 in Ottawa. Photo credit: THE CANADIAN PRESS/Adrian Wyld.

These changes showcase Canada’s efforts to pair existing challenges with existing solutions — initiatives that require relatively low effort. Yet while these digitization efforts streamline administrative processes and reduce administrative burden, application backlogs persist.

While useful, these initiatives focus on efficiencies in IRCC processing for employees. As IRCC evaluates and develops processes, it should prioritize the experience of end users by taking a migrant-centred design approach.

The value of human-centred design

Human-centred design is the practice of putting real people at the centre of any development process, including for programs, policies or technology. It places the end user at the forefront of development so user needs and preferences are considered each step of the way.

To maximize value in technology implementation, IRCC should take a migrant-centred design approach: apply human-centred design principles with migrants treated as the end users. This approach should consider the following suggestions:

  1. Centre migrants in the development of immigration programs, policies and services, and digital and technological tools by seeking migrants’ input in forthcoming and proposed changes.
  2. Take a life-course approach to human and social service delivery by recognizing that “all stages of a person’s life are intricately intertwined with each other.” Government services prioritize Canadian citizens, but a life-course approach understands that all individuals in Canada, regardless of their current immigration status, represent potential Canadian citizens. Government services should implement a life-course approach by prioritizing quality services for migrants, some of whom are seniors who will require different government services in the future.
  3. Combat discrimination and bias in developing new immigration technology tools. Artificial intelligence and other advanced digital technologies have the capacity to reproduce biases and discrimination that currently exist in IRCC. Any new technologies must be evaluated to prevent discrimination or bias.

As Canada continues to explore how technology can help streamline and improve the migrant journey, migrant-centred design should be at the forefront of its planning. Processes, policies and tools designed with their intended users at the centre are more likely to resonate with those users.

If Canada wants to be a first-choice migration destination, we need to approach immigration policies — including technology use — as opportunities to empower and encourage migrants.

The metaverse offers challenges and possibilities for the future of the retail industry

Written by Omar H. Fares, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

As technology improves, the potential for retailers to make use of the metaverse will grow.

In 1968, American computer scientist Ivan Sutherland predicted the future of augmented and virtual reality with his concept of the “Ultimate Display.” The Ultimate Display relied on the kinetic depth effect to create two-dimensional images that moved with the user, giving the illusion of a three-dimensional display.

While the concept of virtual reality only focuses on the creation of three-dimensional environments, the metaverse — a term coined by Neal Stephenson in his 1992 book Snow Crash — is a much broader concept that surpasses this.

While no official definition of the metaverse truly exists, science and technology reporter Matthew Sparkes provides a decent one. He defines the metaverse as “a shared online space that incorporates 3D graphics, either on a screen or in virtual reality.”

Since the term was coined, the metaverse has remained more of a fictional concept than a scientific one. However, with technological advancements in recent years, it has become more tangible. Much of the recent hype followed Mark Zuckerberg’s announcement that the Facebook brand would be renamed Meta. Many retailers have since jumped aboard the metaverse train.

A white man in a black long-sleeved shirt gestures while speaking.
Meta chief executive Mark Zuckerberg announced Facebook’s name change to Meta in 2021. He said the move reflected the company’s interest in broader technological ideas, like the metaverse. Photo credit: Nick Wass/AP Photo.

Nike recently filed multiple trademarks that will allow it to create and sell Nike shoes and apparel virtually. JP Morgan opened its first virtual bank branch. Samsung recreated its New York City flagship store in the virtual browser-based platform Decentraland, where it is launching new products and hosting events.

While many retailers are capitalizing on the metaverse early, there is still uncertainty about whether the metaverse really is the future of retailing or whether it will be a short-lived fad.

Dispelling metaverse myths

Much of that uncertainty around the metaverse stems from confusion about the technology. While examining the top keyword associations related to the metaverse on Google Trends, I found “what is metaverse” and “metaverse meaning” to be the top phrases customers searched for. To alleviate some of this confusion, it’s important to dispel commonly held myths about the metaverse.

Myth 1: You need a VR headset to access the metaverse

While an optimal experience in the metaverse can be achieved through VR headsets, anyone can access the metaverse through their personal computers. For instance, customers can create their avatars and access the metaverse in Decentraland on screen without a VR headset.

A virtual avatar in a green shirt, black pants, and sneakers standing in a virtual world.
My virtual avatar in Decentraland. Photo credit: (Decentraland Foundation), author provided.

Myth 2: The metaverse will replace real-life interactions

Rather than replacing existing modes of communication, the metaverse provides a more interactive one. New technologies always bring predictions about the end of physical interaction. It’s helpful to compare the metaverse with the rise of smartphones: smartphones enhanced communication by allowing people to interact with their social networks, but they have not entirely replaced face-to-face interaction. The metaverse will be the same.

Myth 3: The metaverse is just for gaming

While gaming remains the dominant driver of user involvement with the metaverse (97 per cent of gaming executives believe that gaming is the centre of the metaverse today), it’s not the only activity people can take part in.

In a recent survey, McKinsey & Company asked customers what their preferred activity on the metaverse would be in the next five years. Shopping virtually ranked the highest, followed by attending telehealth appointments and virtual synchronous courses.

Keeping expectations realistic

In its current form, the metaverse lacks the technological infrastructure to deliver on market expectations. It may be appropriate to compare the metaverse with the dot-com bubble between 1995 and 2000 that was caused by speculation in internet-based businesses.

Similarly, there appears to be tremendous hype and expectations around what the technology can deliver in its current form. A recent survey of 1,500 consumers found that 51 per cent of people expect customer service to be better in the metaverse, 32 per cent expect less frustration and anxiety while dealing with customer service agents in the metaverse compared to phone interactions, and 27 per cent expect interactions with metaverse virtual avatar assistants to be more effective than online chat-bots.

While such expectations can appear reasonable, metaverse technology is still in its infancy, with the focus remaining on developing infrastructure and processes for the future. These unrealistic expectations may lead to a metaverse bubble as reality struggles to meet them.

Challenges for retailers

As with any emerging technology, retailers need to be prepared for challenges posed by the metaverse. Some of these challenges include the following:

  • Data security and privacy: With the novelty of metaverse technology and the wealth of personal data collected, the metaverse will be an attractive target for cyber-hackers. New approaches and methods need to be considered for a safe metaverse that customers can trust.
  • Experienced talent: Having the right talent that can create, manage and support experiences in the metaverse needs to be at the forefront of engaging with the technology. However, due to the novelty of the technology, finding such talent will be a challenge.
  • Regulations: With no clear jurisdictions and regulations in place, the safety of virtual spaces in the metaverse may be compromised and end up pushing customers away. Retailers need to ensure these spaces are safe and protected.
  • Managing customers’ expectations: Retailers need to educate their customers about what can currently be done in the metaverse, and what customers should expect from businesses in the metaverse.

Despite these challenges, retailers will still be able to craft novel shopping experiences in the metaverse — it will just require appropriately skilled and qualified people to make it happen. With appropriate planning and preparation, retailers will be able to meet these challenges head-on.

A woman wearing a VR headset standing in a shopping mall.
The metaverse will have the potential to revolutionize the retail industry once the technology is advanced enough. Photo credit: Shutterstock.

Opportunities for retailers

As technology improves, the potential uses of the metaverse for retailers will grow. At the moment, the metaverse offers retailers three key opportunities for improving the online shopping experience.

The first is brand exposure. Retailers can expand their presence through virtual billboards and interactive advertisements with less noise than existing online and mobile channels. Cloud Nine, an IT services company, is one of the earliest companies to advertise its services on virtual billboards in Decentraland. Virtual billboard advertising is something marketers should keep in mind.

Secondly, the metaverse offers unique experiences for customers to engage with brands through events, contests, and game-like features. Such experiences could increase loyalty and brand engagement. The Metaverse Fashion Week is an example of how retailers can create unique brand engagement opportunities. Retailers including Tommy Hilfiger, Perry Ellis and Dolce & Gabbana all participated in the pilot experience, leading the wave for immersive and unique customer-brand interactions.

Lastly, the metaverse provides retailers the chance to personalize customer experiences. Similar to how retailers can customize customers’ online experiences through data collection, retailers can tailor customer experiences in the virtual environment. In Meta’s Horizon Worlds, for example, users can create their own virtual worlds, invite friends and customize their own experiences.

Elon Musk’s Twitter Blue fiasco: Governments need to better regulate how companies use trademarks

Written by Alexandra Mogyoros, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

We often trust corporate logos and symbols without necessarily understanding the legal statutes that govern them. 

Until recently, Twitter’s blue checkmark logo was (for better or worse) a trusted mark of authenticity. But under the façade of democratizing the platform, Elon Musk allowed the blue checkmark to be purchased by anyone — with unsurprisingly chaotic results.

Impersonators soon made use of the blue checkmark, with negative consequences for those brands, companies and public figures who had their Twitter accounts impersonated.

The twitter verification symbol: a blue circle with a white tick in the middle.
Twitter’s blue checkmark informs users that accounts are verified as authentic. The sale of verifications led to widespread impersonations on Twitter. Photo credit: Shutterstock.

After a Twitter Blue account impersonated pharmaceutical company Eli Lilly and announced that insulin would be free, the pharmaceutical giant lost over US$15 billion in market cap. This shines a light on a greater problem in our society: how we trust logos without necessarily understanding the standards or quality behind them.

Musk’s Twitter Blue campaign capitalized on users’ trust and profited from it. The decision highlights the larger problem of consumers relying on logos that appear to be trustworthy, but really provide little to no substantiation. It calls out for better regulation of how social media platforms manage misinformation and disinformation.

Logos communicate information that consumers trust

Logos are used not just to signal the brand behind a product (like Nike’s swoosh or Starbucks’ siren); they also tell us things about a product, such as whether it is certified vegan or gluten-free. We don’t necessarily understand what logos, like verification badges or Cineplex’s VENUESAFE logo, are claiming to certify or how, yet we trust them to signify a certain level of safety and authenticity. This trust comes in many forms and can be earned — or acquired — in lots of ways.

Sometimes we trust a logo simply because it uses aesthetic attributes that implicitly signal trustworthiness to us: for example, they might appear like a seal, or checkmark, or make use of words like “verified,” “certified” or “guaranteed.”

Sometimes we trust them because they have websites that explain to us in clear terms the exclusivity of being able to use the mark. Other times, we simply trust the brand or platform making use of those logos and let their goodwill transfer to the symbol in question.

Elon Musk allowing verification badges to simply be bought by anyone is an example of how powerful and misguided trust in logos can be. When people see a logo that seems to verify something, they often make assumptions both about what quality is being promised and the legitimacy of that promise.

From verification badges to loyalty checkmarks

Musk professed to be irked by the “exclusivity” of verification marks. Twitter’s previous verification program began affirming the identity of some Twitter users in response to problems with impersonation.

He tweeted: “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit. Power to the people! Blue for $8/month.”

But, “verification to anyone willing to pay for it ignores the reasons the existing system was put in place and potentially undermines the overall trust in Twitter that it’s supposed to provide.” Allowing users to buy the blue checkmark logo undermined the trustworthiness it had earned. The same logo suddenly signalled two very different kinds of information and caused confusion.

It didn’t matter that Musk had announced that the blue checkmark’s meaning had been effectively corrupted. Information being available to consumers isn’t always a cure-all in the face of reliance and trust.

Not all logos are regulated equally

The use of these symbols is regulated to varying degrees. While consumer protection law prevents us from being outright lied to or misled, these marks are insidious: they don’t necessarily guarantee us anything, yet they command our trust through the implicit standards they promise.

Our legal system does not provide substantive oversight of these checkmarks, nor does it adequately recognize the role trust plays in consumers’ reliance on them. This can cause problems. Two weeks ago, it caused problems for Eli Lilly.

Previously, it has caused problems for communities whom empty certification marks promise to help but do not. It also causes problems for the consumers who trust an ultimately untrustworthy source.

The Twitter symbol on the company's headquarters in San Francisco.
The Twitter symbol on the company’s headquarters in San Francisco. Corporate brands are protected through trademark laws irrespective of how companies behave. Maybe it is time we reconsider that. Photo credit: Jeff Chiu/AP Photo

Logos are essential to brand identity and are extraordinarily valuable assets to their corporate owners. Consequently, brands do not take kindly to having their ability to use their logos limited. Our legal system needs to do better and govern logos through trademark law in a way that more realistically reflects the role they play.

Brands need to be held accountable. They are protected through trademark laws irrespective of how they behave. Maybe it is time we reconsider that.

Musk turning Twitter’s verification badge into a subscription service was wrong, and likely strategically motivated. Musk has since announced that Blue Verified will be relaunched on Nov. 29 to ensure it is “rock solid.”

At the end of the day, the blue checkmark will only be as trustworthy as the brand that stands behind it. Right now, that brand is Elon Musk.

What is the metaverse, and what can we do there?

Written by Adrian Ma, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

What will it take for the metaverse to live up to its potential?  

You’ve likely heard recently how the metaverse will usher in a new era of digital connectivity, virtual reality (VR) experiences and e-commerce. Tech companies are betting big on it: Microsoft’s massive US$68.7 billion acquisition of game developing giant Activision Blizzard reflected the company’s desire to bolster its position in the interactive entertainment space.

Prior to this, Facebook’s parent company rebranded itself as Meta — a key pillar of founder Mark Zuckerberg’s grand ambitions to reimagine the social media platform as “a metaverse company, building the future of social connection.”

But other non-tech corporations are clamouring to get in on the ground floor as well, from Nike filing new trademarks to sell virtual Air Jordans and Walmart preparing to offer virtual merchandise in online stores using its own cryptocurrency and non-fungible tokens (NFTs).

As a journalism professor who has been researching the future of immersive media, I agree the metaverse opens up transformative opportunities. But I also see inherent challenges in its road to mainstream adoption. So what exactly is the metaverse and why is it being hyped up as a game-changing innovation?

Entering the metaverse

The metaverse is “an integrated network of 3D virtual worlds.” These worlds are accessed through a virtual reality headset — users navigate the metaverse using their eye movements, feedback controllers or voice commands. The headset immerses the user, creating what is known as presence: the physical sensation of actually being there.

To see the metaverse in action, we can look at popular massively multiplayer virtual reality games such as Rec Room or Horizon Worlds, where participants use avatars to interact with each other and manipulate their environment.

But the wider applications beyond gaming are staggering. Musicians and entertainment labels are experimenting with hosting concerts in the metaverse. The sports industry is following suit, with top franchises like Manchester City building virtual stadiums so fans can watch games and, presumably, purchase virtual merchandise.

Perhaps the farthest reaching opportunities for the metaverse will be in online learning and government services.

children using laptops sit at a table with a digital dinosaur hologram in the middle.
The metaverse contains exciting new applications for education at all levels. Photo credit: Shutterstock.

This is the popular conception of the metaverse: a VR-based world independent of our physical one where people can socialize and engage in a seemingly unlimited variety of virtual experiences, all supported with its own digital economy.

More than virtual reality

But there are challenges to overcome before the metaverse can achieve widespread, global adoption. And one key challenge is the “virtual” part of this universe.

While VR is considered a key ingredient of the metaverse recipe, entrance to the metaverse is not (and should not be) limited to having a VR headset. In a sense, anyone with a computer or smartphone can tap into a metaverse experience, such as the digital world of Second Life. Broad accessibility is key to making the metaverse work, given VR’s continued uphill battle to gain traction with consumers.

The VR market has seen remarkable innovations in a short period of time. A few years ago, people interested in home VR had to choose between expensive computer-based systems that tethered the user or low-cost but extremely limited smartphone-based headsets.

Now we’ve seen the arrival of affordable, ultra high-quality, portable wireless headsets like Meta’s Quest line, which has quickly become the market leader in home VR. The graphics are sensational, the content library is more robust than ever, and the device costs less than most video game consoles. So why are so few people using VR?

On one hand, global sales of VR headsets have been growing, with 2021 being a banner year for headset manufacturers, who had their best sales since 2016’s flurry of big-brand VR device releases. But they still only sold around 11 million devices worldwide.

Getting people to even use their devices can be a challenge, as it’s estimated only 28 per cent of people who own VR headsets use them on a daily basis. As numerous tech critics have pointed out, the VR mainstream revolution that has been promised for years has largely failed to come to fruition.

a woman wearing a vr headset with an outstretched hand.
Virtual reality headsets are increasing in popularity, but there are challenges to their widespread adoption. Photo credit: Shutterstock.

Virtual movement, physical discomfort

There are myriad factors, from missed marketing opportunities to manufacturing obstacles, as to why VR hasn’t caught on in a bigger way. But it’s possible that using VR is inherently unappealing to a significant number of people, particularly for frequent use.

Despite impressive advancements in screen technology, VR developers are still trying to address the “cybersickness” — a feeling of nausea akin to motion sickness — their devices elicit in many users.

Studies have found that physical discomfort in the neck may present another barrier, one that may remain an issue as long as VR requires the use of large headsets. There’s also research suggesting that women experience much higher levels of discomfort because the fit of the headset is optimized for men.

And beyond the physical challenges of using VR is the isolating nature of it: “Once you put on the headset, you’re separated from the world around you,” writes Ramona Pringle, a digital technology professor and researcher.

Certainly, some are drawn to VR to experience heightened escapism or to interact with others virtually. But this disconnection to the physical world, and the uneasy feeling of separation from people, may be a significant hurdle in getting people to voluntarily wear a headset for hours at a time.

Mediated, magical worlds everywhere

Augmented reality (AR) experiences may hold the key for the metaverse to reach its true potential. With AR, people use their smartphones (or other devices) to digitally enhance what they perceive in the physical world in real time, allowing them to tap into a virtual world while still feeling present in this one.

An interview with video games researcher and designer Kris Alexander on the potential of augmented reality.

A metaverse centred on augmented reality wouldn’t be a completely new digital world — it would intersect with our real world. It’s this version of the metaverse that could actually have the ability to change the way we live, argues computer scientist and tech writer Louis Rosenberg:

“I believe the vision portrayed by many Metaverse companies of a world filled with cartoonish avatars is misleading. Yes, virtual worlds for socializing will become quite popular, but it will not be the means through which immersive media transforms society. The true Metaverse — the one that becomes the central platform of our lives — will be an augmented world. If we do it right, it will be magical, and it will be everywhere.” 

This federal election, the Liberals are outspending all the other parties combined when buying ads on Facebook

Written by researchers at Ryerson University. Photo credit: THE CANADIAN PRESS/Nathan Denette. Originally published in The Conversation.

Liberal Leader Justin Trudeau, right, leaves the stage with MP candidate Chrystia Freeland after revealing his party’s election platform.

Today, 94 per cent of Canadian adults who use the internet have at least one social media account, and 83 per cent report having a Facebook account. This trend will likely continue as more people turn to the internet and social media to stay connected.

The shift in how and where people spend their time and attention has given rise to a widely adopted practice called microtargeting. Microtargeting is a marketing strategy that relies on using users’ demographic and social media data — the things we “like” on social media, who we are friends with, businesses that we have frequented, etc. — to identify and segment people into narrowly defined small groups in order to show them personalized ads.
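At its core, microtargeting means filtering a user database on combinations of attributes until only a narrow segment remains, then serving that segment a tailored ad. The following minimal Python sketch illustrates the idea; the users, field names and values are entirely invented for illustration.

```python
# Toy illustration of microtargeting: segment hypothetical users into
# narrow groups based on profile attributes, then match ads to segments.
# All names, fields and data here are invented for illustration.

users = [
    {"name": "A", "age": 67, "province": "ON", "likes": {"gardening", "news"}},
    {"name": "B", "age": 22, "province": "BC", "likes": {"gaming", "climate"}},
    {"name": "C", "age": 70, "province": "QC", "likes": {"news", "travel"}},
]

def segment(users, min_age, max_age, interest):
    """Return users in an age band who share a given interest."""
    return [u for u in users
            if min_age <= u["age"] <= max_age and interest in u["likes"]]

# A campaign might pair each narrow segment with a tailored message.
seniors_news = segment(users, 65, 120, "news")
print([u["name"] for u in seniors_news])  # ['A', 'C']
```

Real platforms apply the same filtering logic at vastly larger scale, across thousands of attributes inferred from behaviour rather than a handful of declared ones.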

In recent years, digital political ad spending has exploded. And in this federal election, the Liberal Party of Canada is outspending all of the other major federal parties combined, while the NDP’s political ads are being shown to Facebook users under 18.

Finely tuned machine

As a platform, Facebook is a finely tuned microtargeting machine. It’s one of the main reasons why political campaigns in places like the United States have been “flooding Facebook with ad dollars.”

This trend is also happening here in Canada. Using Facebook’s Ad Library, we found that between July 31 and Aug. 29, major political parties in Canada had spent nearly $2.5 million across Facebook, Instagram and Messenger. The federal Liberal Party alone spent $1.5 million on 7,038 ads, far outpacing the combined spending of the other major federal parties.

Amount spent on Facebook ads by major political parties in Canada between July 31 and Aug. 29, 2021. Photo credit: Facebook Ad Library Report.

Analyzing the data

As part of the Social Media Lab’s Election 44 transparency and accountability initiative, we have been tracking Canadian political ad spending on Facebook using PoliDashboard, a data visualization tool designed to help voters, journalists and campaign staffers monitor political discourse in Canada. The dashboard is part of our ongoing research on online engagement and the use of social bots to influence public opinion on issues of national importance, like the elections and the ongoing COVID-19 pandemic.

PoliDashboard is publicly accessible and consists of two main modules. The first is the #CDNPoli Twitter Module, which provides near real-time analysis of public #CDNPoli tweets, including detecting the presence of bots or automated accounts. The second is the Facebook Political Ads Module, which collects and analyzes data about political advertisers and the ads they are running on Facebook.

The tool spotlights people and organizations vying for voters’ attention on social media and brings more transparency to online political discourse.

PoliDashboard is a data visualization tool to monitor political discourse in Canada. Photo credit: PoliDashboard/Ryerson University Social Media Lab.

The Facebook Political Ads Module shows information about active and inactive ads involving social issues, elections or politics across Facebook products in Canada and is automatically updated every four hours via the Facebook Ad Library API. The module generates two interactive charts showing all of the ads the advertiser is running, who they are targeting and where in Canada the ad was shown.
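To give a sense of what polling the Facebook Ad Library API involves, here is a minimal sketch of building a query URL for the public `ads_archive` endpoint. The parameter names follow Meta's published Ad Library API, but the API version, field selection and token value are illustrative assumptions; this is not PoliDashboard's actual code.

```python
# Sketch of a query against the Facebook Ad Library API ("ads_archive"
# endpoint). Parameter names follow Meta's public Ad Library API; the
# API version, field list and token value are illustrative assumptions.
from urllib.parse import urlencode

AD_ARCHIVE_URL = "https://graph.facebook.com/v17.0/ads_archive"

def build_query_url(page_id: str, access_token: str) -> str:
    """Assemble a request URL for political/issue ads shown in Canada."""
    params = {
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['CA']",
        "search_page_ids": page_id,  # e.g. a party's Facebook page ID
        "fields": "ad_creative_bodies,spend,demographic_distribution",
        "access_token": access_token,
    }
    return AD_ARCHIVE_URL + "?" + urlencode(params)

# A real client would fetch this URL (e.g. with urllib.request.urlopen)
# on a schedule and follow the "paging" cursors in each JSON response.
```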

PoliDashboard automatically aggregates political ads purchased by an advertiser, displaying how individual advertisers in Canada deploy their ad budget, where the ads are shown and who they are targeting with each ad.

Targeted audiences

According to our analysis of parties’ ad spending on Facebook during the first two weeks of the campaign (Aug. 15 to 28), the Liberals, the Conservatives and the NDP ran most of their ads in the four largest provinces: Ontario, Québec, British Columbia and Alberta, which is to be expected as these are also the most vote-rich provinces. Almost all of the Bloc Québécois’s ads ran in Québec.

Both the Liberals and the NDP largely targeted women Facebook users, while the Conservative Party’s most frequently targeted audience consisted of men. The Bloc mostly targeted men in the 45-64 age group and women 65 and older. These findings are in line with the new survey data from Nanos Research showing that Conservatives are surging with male voters and Liberals with female voters.

Facebook’s Political Ads by the different political parties between Aug. 15 and 28. The national Green Party did not run any ads on Facebook during this period. Photo credit: PoliDashboard/Ryerson University Social Media Lab.

The most striking difference between the ad strategies of the different parties, however, was in the age group of targeted voters. The Liberals frequently targeted their ads towards seniors (especially people 65 and older). So did the Bloc, while the Conservatives aimed for middle-aged voters and the NDP went after younger voters.

PoliDashboard has also revealed that some political ads were shown to people who cannot legally vote. Canadians under 18 cannot vote, yet our data shows that some of the NDP’s political ads were shown to Facebook users under 18. Of the 334 ads run by the NDP, 46 were shown to users aged 13 to 17 more than 75,000 times.

However, without additional data about the targeting criteria used for these 46 ads, it is not possible for us to know why they were shown to underage users. It does not appear as though this underage group was specifically targeted by the party, since the same ads were shown to other age groups.

Behind the curtain

We now know whom the parties are targeting with their ads. These glimpses into who is vying for voters’ attention on Facebook are a keen reminder that much of how Facebook functions is still a mystery to the public.

As more campaigns turn to social media to reach voters, the lack of transparency in digital political advertising and the role of algorithms in microtargeting raises many questions about accountability and transparency in our democratic processes.

At a minimum, transparency should include information about the criteria that political advertisers and Facebook use for targeting each ad. Without such information, it will be very difficult for political opponents, watchdog groups and election regulators to catch and flag falsehoods or engage in counterspeech.

The Taliban may have access to the biometric data of civilians who helped the U.S. military

Written by a researcher at Ryerson University. Photo credit: AP Photo/Rahmat Gul. Originally published in The Conversation.

Taliban fighters stand guard at a checkpoint in Kabul, Afghanistan, on Aug. 18, 2021.

In 2007, the United States military began using a small, handheld device to collect and match the iris, fingerprint and facial scans of over 1.5 million Afghans against a database of biometric data. The device, known as Handheld Interagency Identity Detection Equipment (HIIDE), was initially developed by the U.S. government as a means to locate insurgents and other wanted individuals. Over time, for the sake of efficiency, the system came to include the data of Afghans assisting the U.S. during the war.

Today, HIIDE provides access to a database of biometric and biographic data, including of those who aided coalition forces. Military equipment and devices — including the collected data — are speculated to have been captured by the Taliban, who have taken over Afghanistan.

This development is the latest in many incidents that exemplify why governments and international organizations cannot yet securely collect and use biometric data in conflict zones and in their crisis responses.

Building biometric databases

Biometric data, or simply biometrics, are unique physical or behavioural characteristics that can be used to identify a person. These include facial features, voice patterns, fingerprints or iris features. Often described as the most secure method of verifying an individual’s identity, biometric data are being used by governments and organizations to verify and grant citizens and clients access to personal information, finances and accounts.

According to a 2007 presentation by the U.S. Army’s Biometrics Task Force, HIIDE collected and matched fingerprints, iris images, facial photos and biographical contextual data of persons of interest against an internal database.

In a May 2021 report, anthropologist Nina Toft Djanegara illustrates how the collection and use of biometrics by the U.S. military in Iraq set the precedent for similar efforts in Afghanistan. There, the “U.S. Army Commander’s Guide to Biometrics in Afghanistan” advised officials to “be creative and persistent in their efforts to enrol as many Afghans as possible.” The guide recognized that people may hesitate to provide their personal information and therefore, officials should “frame biometric enrolment as a matter of ‘protecting their people.’”

Inspired by the U.S. biometrics system, the Afghan government began work to establish a national ID card, collecting biometric data from university students, soldiers, and passport and driver’s licence applicants.

Although it remains uncertain at this time whether the Taliban has captured HIIDE and if it can access the aforementioned biometric information of individuals, the risk to those whose data is stored on the system is high. In 2016 and 2017, the Taliban stopped passenger buses across the country to conduct biometric checks of all passengers to determine whether there were government officials on the bus. These stops sometimes resulted in hostage situations and executions carried out by the Taliban.

Placing people at increased risk

We are familiar with biometric technology through mobile features like Apple’s Touch ID or Samsung’s fingerprint scanner, or by engaging with facial recognition systems while passing through international borders. For many people who live in conflict zones or rely on humanitarian aid in the Middle East, Asia and Africa, biometrics are presented as a secure measure for accessing resources and services to fulfil their most basic needs.

In 2002, the United Nations High Commissioner for Refugees (UNHCR) introduced iris-recognition technology during the repatriation of more than 1.5 million Afghan refugees from Pakistan. The technology was used to identify individuals who sought funds “more than once.” If the algorithm matched a new entry to a pre-existing iris record, the claimant was refused aid.

An Afghan internally displaced refugee receives winter necessities from the UNHCR in 2017. Photo credit: AP Photo/Rahmat Gul.

The UNHCR was so confident in the use of biometrics that it decided not to allow refugees to dispute the results at all. From March to October 2002, 396,000 people flagged as false claimants were turned away from receiving aid. However, as communications scholar Mirca Madianou argues, iris recognition has an error rate of two to three per cent, suggesting that roughly 11,800 of those alleged false claimants were wrongly denied aid.
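The scale of that figure can be sanity-checked with a quick back-of-the-envelope calculation (a sketch only; it assumes the upper end of the reported error rate applies uniformly to the rejected claims):

```python
# Estimate how many rejected claimants may have been wrongly denied aid,
# given the reported iris-recognition error rate.
rejected_claimants = 396_000  # claimants turned away, March-October 2002
error_rate = 0.03             # upper end of the two-to-three per cent range

wrongly_denied = rejected_claimants * error_rate
print(f"Estimated wrongly denied: {wrongly_denied:,.0f}")
# → Estimated wrongly denied: 11,880
```

The result, about 11,880, matches the roughly 11,800 figure cited above.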

Additionally, since 2018, the UNHCR has collected biometric data from Rohingya refugees. However, reports recently emerged that the UNHCR shared this data with the government of Bangladesh, who subsequently shared it with the Myanmar government to identify individuals for possible repatriation (all without the Rohingya’s consent). The Rohingya, like the Afghan refugees, were instructed to register their biometrics to receive and access aid in conflict areas.

The UNHCR collects the biometric data of refugees in Uganda.

In 2007, as the U.S. government was introducing HIIDE in Afghanistan, the U.S. Marine Corps was walling off Fallujah in Iraq, ostensibly to deny insurgents freedom of movement. To get into Fallujah, individuals required a badge, obtained in exchange for their biometric data. After the U.S. withdrew from Iraq in 2020, the database remained in place, including all the biometric data of those who had worked on bases.

Protecting privacy over time

Registering in a biometric database means trusting not just the current organization requesting the data but any future organization that may come into power or have access to the data. Additionally, the collection and use of biometric data in conflict zones and crisis response present heightened risks for already vulnerable groups.

While collecting biometric data is useful in specific contexts, this must be done carefully. Ensuring the security and privacy of those who could be most at risk and those who are likely to be compromised or made vulnerable is critical. If security and privacy cannot be ensured, then biometric data collection and use should not be deployed in conflict zones and crisis response.

As cyberattacks skyrocket, Canada needs to work with — and not hinder — cybersecurity experts

Written by , Ryerson University; , Ryerson University. Photo credit: Alexandre Debiève/Unsplash. Originally published in The Conversation.

Cyberattacks are on the rise, impacting people, systems, infrastructures and governments with potentially devastating and far-reaching effects. Most recently, these include the massive REvil ransomware attack and the discovery that the Pegasus spyware was tracking more than 1,000 people.

A common cause of cyberattacks involves the exploitation of security vulnerabilities. These are conditions or behaviours that can enable the breach, misuse and manipulation of data. Examples can include poorly written computer code or something as simple as failing to install a security patch.
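As an illustration of what "poorly written computer code" can mean in practice (a generic sketch, not drawn from any system mentioned in this article), consider SQL injection, one of the most common classes of vulnerability. Building a database query by pasting user input directly into the SQL text lets an attacker smuggle in their own logic; a parameterized query closes the hole:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: the input is interpolated into the SQL text, so the
# injected OR clause becomes part of the query and matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the input as plain data,
# so the injection attempt is just an oddly named user.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] — the injection succeeded
print(safe)        # [] — the injection failed
```

The fix is a single line, which is exactly why attackers probe for these oversights: small lapses in code hygiene, like a missed patch, can expose an entire system.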

Exploiting vulnerabilities

There can be particularly significant impacts when attackers exploit security vulnerabilities involving digital systems used by federal governments.

For example, in July 2015, the United States Office of Personnel Management announced that malicious hackers had exfiltrated highly sensitive personal information and fingerprints of roughly 21.5 million federal workers and their associates, due to a string of poor security practices and system vulnerabilities.

The massive data breach served as a wake-up call for the U.S. federal government. Barack Obama’s administration consequently announced the Department of Defense would be responsible for storing federal employee data.

Not long after that, the “Hack the Pentagon” pilot program was announced, where the U.S. government invited external experts to responsibly report security flaws.

In 2016, the Pentagon announced a program to help them identify security vulnerabilities.

This pilot paved the way for what has become a standard security practice used by the U.S. government. Since 2020, all American federal agencies have been required to enable the disclosure of security vulnerabilities.

Canada lagging behind

By comparison, our recent report found that the government of Canada is lagging behind countries like the U.S. by failing to welcome vulnerability reports from external experts.

We haven’t had an attack the size of the Office of Personnel Management breach in the U.S., but we aren’t immune either.

Consider the 2017 Equifax breach, in which 19,000 Canadians were affected after attackers exploited a security vulnerability in an online customer portal.

In August 2020, the Canada Revenue Agency locked more than 5,000 user accounts due to cyberattacks partially enabled by the agency’s lack of two-factor authentication.

Our report, published through the Cybersecure Policy Exchange at Ryerson University, is the first publicly available research that examines how Canada treats the reporting of security flaws in comparison to other countries.

We discovered that while 60 per cent of G20 members have distinct and clear processes for reporting security vulnerabilities in public infrastructure, Canada does not.

When assessing whether the Government of Canada meets standards for vulnerability disclosure in comparison to G20 members, we discovered that Canada is falling behind its peers. Photo credit: Cybersecure Policy Exchange/Ryerson University

Cybersecurity experts can disclose “cyber incidents” to the Canadian Centre for Cyber Security. But this term is defined so narrowly that it excludes vulnerabilities that have not yet been weaponized.

And while the United Kingdom and the U.S. governments have promised to make efforts to fix security flaws that are reported, the Canadian Centre for Cyber Security has made no such promise.

By not supporting and protecting security researchers in identifying vulnerabilities, these gaps ultimately put Canada and Canadians at greater risk.

Vulnerable systems, vulnerable people

Cybersecurity experts can face significant legal risks when they report security flaws to the Canadian government. Computer hacking is prohibited by the Criminal Code, and in certain circumstances by laws like the Copyright Act.

Some of the legal risks in Canada for discovering and disclosing security vulnerabilities found in software and hardware. Photo credit: Cybersecure Policy Exchange/Ryerson University

But unlike in the Netherlands and the U.S., there is no legal framework here for reporting security vulnerabilities in good faith.

Canada’s current approach has a chilling effect on the disclosure of security weaknesses found not only in government systems, but in all software and hardware.

This approach largely leaves cybersecurity researchers in the dark about whether — and how — they should notify the government when they spot security flaws that could be exploited.

A cybersecure Canada requires working with experts who identify the security risks faced by our institutions and infrastructure.

It’s not too late for the federal government to institute a process allowing experts to report security flaws, and to draw on best practices while doing so.

Our work outlines the importance of defining who can submit vulnerability reports, and describes what the reporting and fixing process can look like. It’s important to credit or recognize the experts who disclose vulnerabilities, and the public should be given information about those vulnerabilities and the solutions required to fix them.

The phases of vulnerability disclosure: discovery, reporting, validation and triage, developing a solution, applying that solution, and informing the public. Photo credit: Cybersecure Policy Exchange/Ryerson University

Imperative improvements

Cybersecurity experts are “a significant but underappreciated resource” when it comes to reducing security risks of government systems. They want to help.

The Canadian government needs to implement clearer processes and policies to foster co-operation with cybersecurity experts working in the public interest.

As cyberattacks grow in frequency, scale and sophistication, better cybersecurity practices in Canada are not just desirable — they are imperative.