OpenAI’s new generative tool Sora could revolutionize marketing and content creation

Written by Omar H. Fares, Toronto Metropolitan University. Originally published in The Conversation.

Sora could serve as a tool that enhances the capabilities of content creators, allowing them to produce higher-quality content more efficiently. (Shutterstock)

OpenAI’s new generative Sora tool has sparked lively technology discussions over the past week, generating both enthusiasm and concern among fans and critics.

Sora is a text-to-video model that significantly advances the integration of deep learning, natural language processing and computer vision to transform textual prompts into detailed and coherent life-like video content.

In contrast to previous text-to-video technologies, like Meta’s Make-A-Video, Sora is able to overcome limitations related to the type of visual data it can interpret, video length and resolution.

From what OpenAI has demonstrated, Sora can generate videos of various lengths, from short clips to full-minute narratives, and in high definition, accommodating a wide range of creative needs.

Although no official release date has been announced, Sora will likely be available to the public in the coming months, judging by OpenAI’s typical pattern of public releases. For now, it’s only available to experts and a few artists and filmmakers.

How Sora works

At the heart of Sora’s innovation is a technique that transforms visual data into a format it can easily understand and manipulate, similar to how words are broken down into tokens for AI processing by text-based applications.

This process involves compressing video data into a more manageable form and breaking it down into patches or segments. These segments act like building blocks that Sora can rearrange to create new videos.

Sora uses a combination of deep learning, natural language processing and computer vision to achieve its capabilities.

Deep learning helps it understand and generate complex patterns in data, natural language processing interprets text prompts to create videos, and computer vision allows it to understand and generate visual content accurately.

By employing a diffusion model — a type of model that’s particularly good at generating high-quality images and videos — Sora can take noisy, incomplete data and transform it into clear, coherent video content.
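To make the patch and diffusion ideas concrete, here is a deliberately simplified Python sketch. It is not OpenAI’s code: the clip size, patch dimensions and noise schedule are illustrative assumptions, and the “denoising” step is faked so the example runs on its own, whereas a real model would predict that step with a neural network conditioned on the text prompt.

```python
import numpy as np

# Toy video: 8 frames of 32x32 RGB (real systems work on compressed latents).
video = np.random.rand(8, 32, 32, 3)

def to_patches(frames, t=2, p=8):
    """Cut a (T, H, W, C) clip into spacetime patches of size t x p x p."""
    T, H, W, C = frames.shape
    patches = []
    for ti in range(0, T, t):
        for hi in range(0, H, p):
            for wi in range(0, W, p):
                patches.append(frames[ti:ti+t, hi:hi+p, wi:wi+p, :].ravel())
    return np.stack(patches)  # each row is one "visual token"

tokens = to_patches(video)
print(tokens.shape)  # (64, 384): 64 patches, each flattened to a vector

# Diffusion, reduced to its bare essence: start from pure noise and take
# small steps toward a clean target. Here we cheat and blend toward the
# target directly; a real model predicts each step from the text prompt.
target = tokens
x = np.random.randn(*target.shape)          # pure noise
for step in np.linspace(1.0, 0.0, num=50):  # noise level goes 1 -> 0
    x = step * x + (1 - step) * target      # blend noise out, content in
print(np.abs(x - target).max())             # ~0: noise has become "video"
```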

Sora’s approach differs from CGI character creation, which requires extensive manual effort, and from traditional deepfake technologies, which often lack ethical safeguards, by offering a scalable and adaptable method for generating video content based on textual input.

What does this mean for businesses?

One of the most noteworthy aspects of Sora is its flexibility, as it supports various video formats and sizes, enhances framing and composition for a professional finish, and accepts text, images or videos as prompts for animating images or extending videos.

The emergence of Sora presents key opportunities for businesses across different sectors. In the near future, there are two key areas that may have significant applications.

The first area is in marketing and advertising. Just as ChatGPT has become a marketing and content creation tool, we can expect businesses to use Sora for similar reasons.

With the public release of Sora, brands and companies will be able to create highly engaging and visually appealing video content for marketing campaigns, social media and advertisements.

The ability to generate custom videos based on textual prompts will allow for greater creativity and personalization, possibly helping brands stand out in a crowded market.

The second area Sora could impact is training and education. Companies could use Sora to develop educational and training videos that are tailored to specific topics or scenarios. This could enhance the learning experience for employees and customers, making complex information more accessible and engaging.

Other sectors, such as e-commerce, also hold promising potential for the future application of Sora. Retailers could create dynamic product demonstrations that effectively showcase products in a more engaging and interactive manner.

This would be especially beneficial for companies that want to highlight specific aspects of products that might not be easily conveyed through static images or text, or for advertising products that require a detailed explanation.

Sora could also significantly reduce the uncertainty associated with online shopping by facilitating virtual try-on experiences, allowing customers to visualize how a product, such as clothing or accessories, would look on them without the need for a physical fitting. This, in turn, could result in a better return on investment.

What are the key challenges ahead?

While there are key opportunities ahead, OpenAI, regulators and users need to carefully consider key factors that could pose challenges, including copyright issues, ethical concerns and the consequences of increased digital noise.

With Sora’s ability to generate lifelike video content, there’s a risk of inadvertently creating videos that infringe on existing copyrights. OpenAI has already been sued several times over copyright infringement and intellectual property issues.

OpenAI hasn’t disclosed where the data used to train Sora is from, but it did tell the New York Times it was training the system using videos that were publicly available and licensed from copyright holders.

The technology also raises ethical questions, particularly around the creation of deepfake videos or misleading content.

Establishing guidelines and safeguards to prevent misuse will be essential for maintaining trust in the technology. In a post on its website, OpenAI stated it was working with experts to test the model before releasing it to the public.

As more businesses and individuals gain access to Sora, there’s a potential for an increase in low-quality or irrelevant video content, leading to increased “digital noise” that could overwhelm users. Finding ways to filter and curate content will become increasingly important for businesses looking to maintain their edge.

Last, but certainly not least, is the question of how Sora will impact the job market for content creators. While Sora does have the potential to automate certain aspects of video production, like ChatGPT, it’s unlikely to replace human creativity and insight anytime soon.

Instead, Sora could serve as a tool that enhances the capabilities of content creators, allowing them to produce higher-quality content more efficiently. As with any technological advancement, the key will be for professionals to adapt and find ways to integrate Sora into their workflows, leveraging its strengths to complement their own skills and creativity.

The first Neuralink brain implant signals a new phase for human-computer interaction

Written by Omar H. Fares, Toronto Metropolitan University. Originally published in The Conversation.

Neuralink is developing devices that enable direct communication between the human brain and computers. (Shutterstock)

The first human has received a Neuralink brain chip implant, according to co-founder Elon Musk. The neurotechnology company has started its first human trial since receiving approval from the U.S. Food and Drug Administration in 2023.

The trial’s focus is on an implant that could potentially allow people with severe physical disabilities to control digital devices using their thoughts. The study involves implanting a brain chip — called a brain-computer interface implant — in the region of the brain that controls movement intention.

Musk has said the patient who received the implant — a device fittingly named Telepathy — is “recovering well” and that “initial results show promising neuron spike detection.” No other details about the trial have been provided yet.

This development is more than just a technical milestone; it represents a major leap in potential human-computer interaction, raising important questions about the integration of advanced technology with the human body and mind.

Neuralink’s mission

Neuralink’s stated mission is to “create a generalized brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.” This mission communicates two key approaches.

In the short term, the focus will be on individuals with medical needs. The long-term vision extends far beyond this, alluding to a goal of augmenting human potential. This suggests Neuralink envisions a future where its technology transcends medical use and becomes a tool for cognitive and sensory enhancement in the general population.

The evolution of Neuralink presents a range of possible future scenarios. The first scenario envisions successful trials leading to adoption in niche markets, signifying a breakthrough but with restricted scope.

The second, more optimistic scenario, involves widespread acceptance after successful human trials, with the potential to revolutionize our interaction with technology. And the third — a more pessimistic view — considers the venture’s failure, driven by many societal, technological, legal and medical factors.

The realistic scenario

In the most realistic scenario, Neuralink is expected to achieve success by focusing on medical applications for individuals with severe disabilities. This targeted approach is likely to resonate with consumers in need of life-changing technologies, which will drive early adoption within this specific demographic.

In this case, wider acceptance from the broader consumer base will hinge on various factors, including the technology’s perceived usefulness, privacy implications and the overall risk-benefit perception.

Socially, Neuralink’s trajectory will be significantly influenced by public and ethical discussions. Issues surrounding data security, long-term health implications and equitable access will likely dominate public discourse.

Widespread acceptance of Neuralink’s technology will depend on its medical efficacy and safety, combined with Neuralink’s ability to address ethical concerns and gain public trust.

The optimistic scenario

In the optimistic scenario, Neuralink’s technology transcends its initial medical applications and integrates into everyday life. This scenario envisions a future where the technology’s benefits are clearly demonstrated and recognized beyond its medical use, generating interest across various sectors of society.

Consumer interest in Neuralink would extend beyond those with medical needs, driven by the appeal of enhanced cognitive abilities and sensory experiences. As people become more familiar with the technology, concerns about invasiveness and data privacy may decrease, especially if Neuralink can provide robust safety and security assurances.

From a societal standpoint, the optimistic scenario sees Neuralink as a catalyst for positive change. The technology could bridge gaps in human potential, offering new ways of interaction and communication.

Elon Musk, co-founder of Neuralink, speaking at VivaTech, one of Europe’s largest tech and start-up fairs, in June 2023 in Paris, France. (Shutterstock)

Although ethical concerns would still exist, the potential benefits in education, workforce productivity and overall quality of life could outweigh them. Regulatory bodies might adopt more accommodating policies, influenced by public enthusiasm and the technology’s track record in improving lives.

In this scenario, Neuralink becomes a symbol of human advancement, seamlessly integrating into daily life and opening new possibilities in human-machine interaction.

Its success would set a precedent for other technologies at the intersection of biology and technology, like gene editing and bioelectronic medicine, paving the way for a future where such integrations are the norm.

The pessimistic scenario

In the pessimistic scenario, Neuralink will face significant challenges that hinder its widespread adoption and success. This scenario considers the possibility of the technology failing to meet the high expectations set for it, either due to technological limitations, safety concerns or ethical dilemmas.

From a technological standpoint, interfacing directly with the human brain could prove more complex than anticipated, leading to underwhelming performance or reliability issues.

Physical and psychological safety concerns might also be more significant than initially thought, with potential long-term health implications that could deter both consumers and medical professionals.

The invasive nature of the technology and privacy concerns related to brain data could lead to widespread public apprehension. This skepticism could be compounded if early applications of the technology are perceived as benefiting only a select few, exacerbating social inequalities.

Ethically, the prospect of brain-computer interfaces could raise questions about human identity, autonomy and the nature of consciousness. These concerns might fuel public opposition, leading to stringent regulatory restrictions and slowing down research and development.

In this scenario, Neuralink’s ambitious vision might be curtailed by a combination of technological hurdles, public mistrust, ethical controversies and regulatory challenges, ultimately leading to the project’s stagnation or decline.

While Neuralink presents numerous possibilities, its journey isn’t merely about technological advancement. The outcome of this venture holds key implications for the future of neural interfaces and our understanding of human capabilities, underscoring the need for a thoughtful approach to such innovation.


Bike and EV charging infrastructure are urgently needed for a green transition

Written by Deborah de Lange, Toronto Metropolitan University. Photo credit THE CANADIAN PRESS/Jonathan Hayward. Originally published in The Conversation.

Canada should invest in sustainable transportation infrastructure to accelerate the green transition.

The green transition is happening too slowly. We are in a climate emergency, and it is clear that we need to reduce greenhouse gas emissions by transitioning to more sustainable transportation.

However, without sufficient infrastructure to enable electric vehicles (EVs) or cycling for commuting, these options will remain too inconvenient or unsafe for most. Canada’s climate obligations will not be met without these infrastructure changes.

We just experienced the hottest July on record. We cannot afford to burn more carbon, whatever remains of the carbon budget. Climate disasters around the world are already dictating timelines. Meanwhile, gas cars needlessly clog city streets, adding to traffic congestion and pollution, while urban sprawl expands gas-car driving habits.

Canada requires urgent investment in transport infrastructure and incentives to reverse this trend.

Policy breakdowns

Here in Toronto, a recent mayoral election provided a platform for two candidates who made election promises to close down cycling lanes. Meanwhile, a lack of high-quality cycling infrastructure in the city incentivizes travel by car to the detriment of the city’s happiness and carbon budget.

This stands in stark contrast to a city like Copenhagen, Denmark, where 62 per cent of people commute by bicycle and which, by some metrics, may also be the happiest city in the world.

Canada currently lacks both cycling infrastructure and reliable bike storage options. Photo credit: THE CANADIAN PRESS/Justin Tang

Closer to home, cycling infrastructure remains poor and bike theft rose by 429 per cent in Canada this summer. However, solutions to this problem, such as bicycle lockers, are not widely installed, and where they do exist, they serve only regular users and require a reservation and monthly payments.

Solutions such as an on-demand bicycle storage system being piloted in Vancouver and the Vancouver City Centre Bike Valet show promise for nation-wide implementation but will require effort to implement at scale.

Nowhere to charge

Likewise, a recent survey says that Canadians are not switching to cleaner EVs partly because of a lack of charging infrastructure. In a climate emergency, bike and electric vehicle infrastructure should have been installed long ago.

Toronto’s mandate is to reach net zero by 2040, but its efforts pale in comparison to the actions of other cities in Canada and around the world.

A variety of incentives and legislation are accelerating the EV transition, including fee exemptions, grants and mandated targets. Brazil is proposing that all gas stations offer EV charging.

A lack of enthusiasm for EVs in Canada is driven largely by a lack of reliable charging infrastructure. Photo credit: THE CANADIAN PRESS/Adrian Wyld

Ireland’s zero emissions office is aiming for 100 per cent of new car sales to be EVs by 2030. France supports EV purchases with funding and bonuses for low income individuals. Ecuador’s public transport will be 100 per cent electric by 2025 and Sweden’s government fleet will be electrified by 2035. Colombia and South Africa are setting EV charging infrastructure minimums.

There are notable Canadian EV initiatives in Québec and British Columbia. Québec has ambitious electrification plans, including expanding EV charging and funding further vehicle electrification across the province. B.C. is improving upon the Canadian national mandate by installing more EV charging stations and planning a changeover to clean vehicles.

In contrast, Ontario and Toronto are without any unique innovations in electric vehicle infrastructure or policy.

An electric future

EVs are already addressing local air pollution around the world and reducing health issues such as asthma. Higher EV sales are also associated with higher scores on the human development index (HDI), a national measure of wealth and a good reflection of standard of living, including health and education. Countries with higher EV sales also tend to lead worldwide in the development of environmental inventions, which make for healthier, better lives.

Perhaps in Sweden, France, The Netherlands, Germany, Japan, Norway and certain Canadian provinces such as Québec and B.C., the connection is clearer between switching to cleaner technologies and increasing levels of personal health and happiness. Improving education is a catalyst for change.

An Electrify Canada charging station designed around a familiar gas pump form factor. Photo credit: THE CANADIAN PRESS/Doug Ives

If Canada is to meet its climate commitments, it has to drastically reduce greenhouse gas emissions from transportation. Infrastructure investments, such as for EVs and cycling, improve our quality of life and the economy at the same time. Building infrastructure is a classic approach to boosting an economy. It is also a green economic opportunity if the right choices are made.

Canada can start by applying well-known policy solutions and rapidly installing infrastructure nationwide. Studies have validated this recommendation, and additional phased-in electrical grid capacity is neither controversial nor impractical. Emissions reductions from EVs, no matter the energy source used to charge them, ultimately validate their green utility over gas-powered cars.

EVs make sense around the world, even in places like China, where the regional energy mix includes coal. In Ontario, emissions reductions of around 80 per cent have been calculated when EVs are driven instead of gas cars.

The International Energy Agency offers a comprehensive policy database of worldwide examples for places like Toronto that are lagging on clean transportation transition policy and change. Beyond benchmarking, Canada could strive for leadership on the world stage by investing in university research and applying ambitious initiatives across the country.

Canada has an opportunity that should not be missed to stimulate its economy by investing in sustainable transportation infrastructure to accelerate the green transition.

The shift from owning to renting goods is ushering in a new era of consumerism

Written by Omar H. Fares, Toronto Metropolitan University, and Seung Hwan (Mark) Lee, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation

Instead of owning physical copies of DVDs or CDs, for example, people subscribe to streaming services, allowing them to access a wide range of products without the burden of traditional ownership.

Today’s consumer landscape is witnessing a pivotal shift away from traditional ownership towards an access-based model. Rather than outright owning goods and services, people prefer to simply have access to them.

Access-based consumption means engaging in transactions where ownership doesn’t change hands. Instead of owning physical copies of DVDs or CDs, for example, people subscribe to streaming services. Consumers are able to access a wide range of products without the burden that comes with traditional ownership.

This approach is closely associated with the sharing economy, which encourages collaborative consumption. This involves sharing, swapping and renting resources, eliminating the need for personal ownership of these goods.

The term “sharing economy” came into use after the 2007 financial crisis as people sought alternative ways to access goods and services, but started gaining more widespread usage in 2010 and 2011.

The sharing economy is growing exponentially. It’s projected to reach a market volume of $335 billion by 2025. This indicates that the way we consume goods and services has evolved significantly, and continues to do so.

A response to global challenges

At a time of economic instability driven by many factors, including the long-lasting effects of COVID-19 and the war in Ukraine, consumers continue to shift their consumption habits in response to these economic shocks.

The access-based and sharing economy has emerged as a powerful response to these global challenges, offering a flexible, cost-effective and more sustainable alternative to the long-standing paradigm of ownership.

Music streaming services allow people to access a wide variety of music without actually owning any physical copies of CDs or records. Photo credit: AP Photo/Jenny Kane.

The rise of access-based consumption doesn’t appear to be a passing phase. Rather, it appears to be an enduring form of consumption that is emerging in various industries, including transportation, fashion and toys.

Navigating the current economic landscape requires a solid grasp of these evolving paradigms. The rise of the access-based and sharing economy is more than a trend towards cost saving; it’s about constructing a sturdier, sustainable consumption model.

What is driving the shift

The growth of access-based consumption is driven by two main things. First, access-based consumption is predicated on the affordability, value and convenience it offers to consumers. Participation in car-sharing services, such as Zipcar and Turo, is primarily driven by these factors.

Secondly, access-based consumption provides environmental and social benefits by encouraging consumers to share and increasing the usage of a particular good.

In the fashion industry, rental services allow consumers to enjoy a variety of choices and gain access to luxury goods they may not otherwise be able to purchase. These services are also beneficial for those experiencing body changes, like pregnant women, as clothing can be shared to reduce careless disposal.

Access-based consumption means there is a time-related aspect to the transaction, either in the form of duration of access or usage. Even so, this doesn’t stop consumers from developing a sense of perceived ownership over a good or service.

The growth of car-sharing services like Zipcar has largely been attributed to the affordability, value and convenience they provide to consumers. Photo credit: Shutterstock.

For example, consumers may develop a sense of pride, attachment and responsibility towards a shared community garden. They may gain social value from participating in this experience.

This social component also extends to peer-to-peer accommodation services, like Airbnb. One study found that the primary reasons American travellers used such a service included sustainability and connecting with community.

Interestingly, while service providers tout intrinsic motivations, such as promoting sustainability and building community, users often have extrinsic factors, such as affordability and convenience, top of mind.

What does this mean for businesses?

Businesses need to reimagine traditional profit strategies, resource utilization, societal impacts and community relationships to better adapt to this shift in the economic paradigm.

Rethink profit: In an access-based economy, businesses need to shift their profit strategies from selling products to facilitating access. This calls for innovative approaches to monetizing services, such as tiered subscriptions, dynamic pricing or pay-per-use approaches, creating multiple revenue streams while fulfilling diverse consumer needs.

Maximizing technological resources: The role of technology is central in orchestrating transactions, maintaining inventory and ensuring a seamless user experience. In an access-based environment, businesses must harness tech advancements like AI, data analytics and the Internet of Things to streamline operations. Investing in digital infrastructure is critical to success in the access-based economy.

Beyond revenue: Profit isn’t the sole aim anymore. The access-based economy focuses on sustainable practices and societal impact. Businesses can position themselves as conscious brands by promoting resource optimization and contributing to societal and communal welfare. This shift towards corporate social responsibility not only elevates a brand’s image, but also resonates with the growing consumer demand for ethical consumption.

The power of trust: Trust is one of the cornerstones of the access-based economy. Consumers need the assurance of safety, quality and reliability before partaking in sharing transactions. Businesses can foster trust by implementing transparent practices, rigorous quality checks and responsive customer service.

The future of consumerism

While ownership does offer consumers unique benefits, including enhanced autonomy and a stronger sense of consumer identity, it’s clear we are shifting away from this model.

As consumers and businesses navigate and adapt to this new landscape, we are not just witnessing a change in how we consume, but in how we perceive value, community and our roles within it.

This dynamic shift towards an access-based model, fuelled by intrinsic and extrinsic motivations, is driven by the idea of a shared future built on access to goods and services, improved efficiency and collective value.

ChatGPT and Threads reflect the challenges of fast tech adoption

Written by Omar H. Fares, Toronto Metropolitan University, and Seung Hwan (Mark) Lee, Toronto Metropolitan University. Photo credit: AP Photo/Richard Drew. Originally published in The Conversation

Meta’s Threads platform experienced a significant drop in users recently. 

ChatGPT recently experienced a decline in user engagement for the first time since its launch in November 2022. From May to June, engagement dropped 9.7 per cent, with the largest decline — 10.3 per cent — occurring in the United States.

Meanwhile, Meta’s Threads platform experienced a significant drop in user numbers, going from more than 49 million users on July 7 to 23.6 million active users by July 14. In the same time frame, the average time users in the U.S. spent on the app dropped from a peak of 21 minutes in early July to just above six minutes.

In the tech world, companies are always racing to be the first ones to introduce new innovations, aiming for the “first mover’s advantage.” This refers to a firm’s ability to get a head start over competitors by being the first to enter a new product category or market.

However, being a trailblazer doesn’t guarantee an easy ride. While there are perceived benefits, there are also a plethora of challenges that arise.


The recent declines of Threads and ChatGPT attest to this reality, demonstrating that rapid and widespread acceptance doesn’t necessarily lead to long-term success.

There are a few reasons why fast adoption isn’t necessarily the key to success, including unsustainable growth, inadequate scaling infrastructure and a lack of user retention strategies.

Unsustainable growth

The idea of unsustainable growth stems from a platform’s inability to uphold or maintain the quality of the user experience while scaling up at a rapid pace.

This is where the real challenge lies: being able to effectively scale up a product or service. It is precisely at this juncture that the concept of unsustainable growth intersects with the Gartner Hype Cycle.

The Gartner Hype Cycle is a model that shows the stages of emerging technology adoption: from the initial hype and inflated expectations, through disillusionment and skepticism, to practical and mainstream productivity.

 

A graph illustrating how ChatGPT and Threads fit into the Gartner Hype Cycle: both saw a period of significant hype and inflated expectations, followed by a drop in user interest. Image credit: Omar H. Fares and Seung Hwan Lee/Author provided.

In the context of unsustainable growth, products like ChatGPT and Threads appear to have reached the stage known as “peak of inflated expectations,” where the publicity of a new product generates over-enthusiasm and unrealistic expectations. During this stage, users rapidly adopt the product due to its novelty and the hype surrounding it.

However, this stage often leads to the “trough of disillusionment.” During this stage, the product fails to meet users’ unrealistic expectations, causing a decline in their interest.

It indicates the product’s growth may have outpaced its ability to provide an excellent user experience. Unless the product is enhanced based on user feedback, declining user engagement will ensue.

This rise and fall underscores the challenge of achieving sustainable growth in the face of rapid adoption. The initial hype often attracts a massive influx of users, but without a clear, scalable strategy for maintaining quality and engagement, platforms can quickly lose their appeal.

Inadequate scaling infrastructure

When a platform’s user base expands at a rapid pace, the question of whether that platform’s infrastructure can scale to the demands of its users becomes critical.

The sudden influx of users that accompanies a successful product launch can be a double-edged sword; it brings a wealth of opportunities for data collection, user feedback and revenue, but also tests the scalability of the platform’s infrastructure.

If the underlying technology, support services or operational strategies are not built to scale, the product might suffer from slow loading times, frequent crashes or a lack of timely customer support — all of which are detrimental to the user experience and a product’s long-term success.

For instance, OpenAI, the company behind ChatGPT, had to limit GPT-4 users to 25 messages every three hours due to infrastructure constraints — even for those with a paid membership. While this helps manage the infrastructure load, it presents a challenge from the user’s perspective.

Users who were accustomed to unlimited interactions with the earlier, GPT-3.5-based version of ChatGPT now find themselves paying for a service with limitations. This may inadvertently dampen user engagement and drive some users away, underscoring the delicate balance between managing infrastructure and maintaining user satisfaction.

Lack of user retention strategies

The ChatGPT app icon displayed on a phone screen. Photo credit: AP Photo/Richard Drew.

One reason why tech businesses struggle to retain users is because they don’t prioritize user-centered design. By failing to incorporate user feedback in product development, businesses can end up offering a product that doesn’t meet user needs.

In addition, businesses must provide effective support for users. Insufficient or unclear onboarding may leave users feeling lost and overwhelmed, leading them to abandon the product. In the case of ChatGPT, OpenAI provides a basic explanation of platform usage, but users are primarily responsible for exploring it themselves.

Users experiment with prompts without a clear understanding of how to generate impactful responses, resulting in uncertainty and frustration. This lack of guidance may contribute to lower engagement rates, as observed in the recent decline.

Lastly, increasing concerns about security threats and privacy have raised questions about how new technologies are protecting their users. The conflict between the need for more personalized experiences and privacy can give rise to a phenomenon called the personalization-privacy paradox.

As individuals grow increasingly uneasy about how their personal information is stored, the lack of proper regulations can lead to a decline in the use of personalized services or technologies.

While rapid user adoption is a promising start, it doesn’t guarantee long-term success. Striking the right balance between growth and infrastructure scalability, adopting a user-centric approach, maintaining user trust and investing in continuous innovation are the cornerstones for enduring success in the competitive tech landscape.

Enhancing survey data with high-tech, long-range drones

Gathering geoscientific data for mining and energy industries through ground-level surveys can be time-consuming, costly and physically dangerous. Drones can replace traditional surveying methods, reducing labour and equipment costs and completing surveys more quickly, but they have their own limitations.

To improve the versatility and quality of typical remote-system surveying data, Toronto Metropolitan University (TMU) engineering alumni Robel Efrem (mechanical) and Alexandre Coutu (electrical) teamed up with Sajad Saeedi from TMU’s Department of Mechanical and Industrial Engineering. In partnership with the alumni’s company, Rosor Corp., they develop near-surface, remotely piloted aircraft systems that can capture multiple types of survey data, such as geophysical and topographic data, in a single pass.

(From left to right) TMU alumni Alexandre Coutu (BEng Electrical Engineering ‘20) and Robel Efrem (MEng Mechanical Engineering ‘23), co-founders of Rosor Corp., collaborate with Sajad Saeedi from TMU’s Department of Mechanical and Industrial Engineering

These long-range survey drones will improve geophysical data collection using custom-designed sensor suites, improved low-altitude accuracy, longer flight endurance and untethered, long-range communication systems. Enhanced geophysical survey data can give mining investors greater insight into national and global mining resources, help solve demand shortages through mineral discoveries, and support environmental monitoring and developing technologies like electric vehicles.

Funding for this project provided by Mitacs. To learn more about how Mitacs supports groundbreaking research and innovation, visit the Mitacs website.

Counting carbs with AI for real-time glucose monitoring

Diabetes patients have to closely monitor how their diet affects their blood glucose levels to avoid serious health complications. While some tools exist to help manage this challenging disease, one that accurately pre-evaluates diabetes patients’ meals and allows for on-the-spot portion adjustments is lacking.

To fill this industry gap and improve the lives of diabetes patients, three Toronto Metropolitan University (TMU) engineering alumni connected with TMU biomedical engineering professor Naimul Khan to develop machine-learning algorithms capable of analyzing 2D food images for 3D depth in real-time. This innovation allows users to snap a photo of their meal and have the carbohydrates counted while they wait, allowing them to adjust their portions or food choices to maintain ideal glucose levels.

Clockwise from top left: Biomedical engineering professor Naimul Khan and alumni Liam Bell (biomedical), Osama Muhammad (mechanical), and Muhammed Ashad Khan (electrical) worked together through Mitacs to improve glucose self-monitoring

Alumni Liam Bell (biomedical), Osama Muhammad (mechanical) and Muhammed Ashad Khan (electrical) are using these algorithms to further develop their smartphone app and accompanying wearable device, Glucose Vision. This technology has the potential to significantly reduce future health issues and the cost burden of diabetes on the Canadian health care system.

Funding for this project provided by Mitacs. To learn more about how Mitacs supports groundbreaking research and innovation, visit the Mitacs website.

Apple’s new Vision Pro mixed-reality headset could bring the metaverse back to life

Written by Omar H. Fares, Toronto Metropolitan University. Photo credit: AP Photo/Jeff Chiu. Originally published in The Conversation

The Apple Vision Pro headset is displayed in a showroom on the Apple campus on June 5, 2023, in Cupertino, Calif.

The metaverse — a shared online space incorporating 3D graphics where users can interact virtually — has been the subject of increased interest and the ambitious goal of big tech companies for the past few years.

Facebook’s rebranding to Meta is the clearest example of this interest. However, despite the billions of dollars that have been invested in the industry, the metaverse has yet to go mainstream.

After the struggles Meta has faced in driving user engagement, many have written off the metaverse as a viable technology for the near future. But the technological landscape is a rapidly evolving one and new advancements can change perceptions and realities quickly.

Apple’s recent announcement of the Vision Pro mixed-reality headset at its annual Worldwide Developers Conference — the company’s largest launch since the Apple Watch was released in 2015 — could be the lifeline the metaverse needs.

About the Vision Pro headset

The Vision Pro headset is a spatial computing device that allows users to interact with apps and other digital content using their hands, eyes and voice, all while maintaining a sense of physical presence. It supports 3D object viewing and spatial video recording and photography.

The Vision Pro is a mixed-reality headset, meaning it combines elements of augmented reality (AR) and virtual reality (VR). While VR creates a completely immersive environment, AR overlays virtual elements onto the real world. Users are able to control how immersed they are while using the Vision Pro.

A video from Apple introducing the Vision Pro headset.

From a technological standpoint, the Vision Pro uses two kinds of microchips: the M2 chip, which is currently used in Macs, and the new R1 chip.

The new R1 chip processes input from 12 cameras, five sensors and six microphones, which reduces the likelihood of any motion sickness given the absence of input delays.

The Vision Pro display system also features a whopping 23 million pixels, meaning it will be able to deliver an almost real-time view of the world with a lag-free environment.

Why do people use new tech?

To gain a better understanding of why Apple’s Vision Pro may throw the metaverse a lifeline, we first need to understand what drives people to accept and use technology. From there, we can make an informed prediction about the future of this new technology.

The first factor that drives the adoption of technology is how easy a piece of technology will be to use, along with the perceived usefulness of the technology. Consumers need to believe technology will add value to their life in order to find it useful.

The second factor that drives the acceptance and use of technology is social circles. People usually look to their family, friends and peers for cues on what is trendy or useful.

The third factor is the level of expected enjoyment of a piece of technology. This is especially important for immersive technologies. Many factors contribute to enjoyment such as system quality, immersion experiences and interactive environment.

The last factor that drives mainstream adoption is affordability. More important, however, is the value derived from new technology — the benefits a user expects to gain, minus costs.

Can Apple save the metaverse?

The launch of the Vision Pro seems to indicate Apple has an understanding of the factors that drive the adoption of new technology.

Apple CEO Tim Cook poses for photos in front of a pair of the company’s new Apple Vision Pro headsets in a showroom on the Apple campus on June 5, 2023, in Cupertino, Calif. Photo credit: AP Photo/Jeff Chiu.

When it comes to ease of use, the Vision Pro offers intuitive hand tracking that lets users interact with simple gestures, along with impressive eye-tracking technology. Users will have the ability to select virtual items just by looking at them.

The Vision Pro also addresses another crucial metaverse challenge: the digital persona. One of the most compelling features of the metaverse is the ability for users to connect virtually with one another, but many find it challenging to connect with cartoon-like avatars.

The Vision Pro is attempting to circumvent this issue by allowing users to create hyper-realistic digital personas. Users will be able to scan their faces to create digital versions of themselves for the metaverse.

The seamless integration of the Vision Pro into the rest of the Apple ecosystem will also likely be a selling point for customers.

Lastly, the power of the so-called “Apple effect” is another key factor that could contribute to the Vision Pro’s success. Apple has built an extremely loyal customer base over the years by establishing trust and credibility. There’s a good chance customers will be open to trying this new technology because of this.

Privacy and pricing

While Apple seems poised to take on the metaverse, there are still some key factors the company needs to consider.

By its very nature, the metaverse requires a wealth of personal data collection to function effectively. This is because the metaverse is designed to offer personalized experiences for users. The way those experiences are created is by collecting data.

Users will need assurances from Apple that their personal data and interactions with Vision Pro are secure and protected. Apple’s past record of prioritizing data security may be an advantage, but there needs to be continuous effort in this area to avoid loss of trust and consumer confidence.

Price-wise, the Vision Pro costs a whopping US$3,499. This will undoubtedly pose a barrier for users and may prevent widespread adoption of the technology. Apple needs to consider strategies to increase the accessibility of this technology to a broader audience.

As we look to the future of this industry, it’s clear the metaverse is anticipated to be fiercely competitive. While Apple brings cutting-edge technology and a loyal customer base, Meta is still one of the original players in this space and its products are significantly more affordable. In other words, the metaverse is very much alive.

Smart wearables that measure sweat provide continuous glucose monitoring

Toronto Metropolitan University (TMU) researchers Reza Eslami and Hadis Zarrin have developed non-invasive sensors powered by movement that can determine the blood sugar levels of diabetes patients from their sweat. The researchers aim to revolutionize diabetes management by creating a user-friendly, continuous glucose monitoring (CGM) system that integrates these sensors into clothing and accessories, allowing diabetes patients to self-monitor their glucose levels 24/7.

Self-powered CGM smart wearables could significantly improve diabetes patients’ quality of life by enabling them to regulate their overall blood sugar level and meet glucose targets consistently. In addition, CGMs could play an essential role in predicting the risk of diabetes development before onset.

Chemical engineering PhD candidate Reza Eslami (left) and chemical engineering professor Hadis Zarrin collaborate to develop a user-friendly, continuous glucose monitoring (CGM) system

Zarrin, the principal investigator at the TMU-based Nanoengineering Laboratory for Energy and Environmental Technologies (NLEET) and a chemical engineering professor, collaborates with Eslami, a chemical engineering PhD candidate, and his start-up, Sensofine, to make this technology widely available. They use machine learning and input from fashion designers to develop smart wearables made of high-performing materials and consider various factors in their design, including accessibility, culture, gender and age.

Funding for this project provided by Mitacs. To learn more about how Mitacs supports groundbreaking research and innovation, visit the Mitacs website.

Gen Z goes retro: Why the younger generation is ditching smartphones for ‘dumb phones’

Written by Omar H. Fares, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

Sales of so-called “dumb phones,” like flip and slide phones, are on the rise among the younger generation.

There is a growing movement among Gen Z to do away with smartphones and revert to “less smart” phones like old-school flip and slide phones. Flip phones were popular in the mid-1990s and 2000s, but now seem to be making a comeback among younger people.

While this may seem like a counter-intuitive trend in our technology-reliant society, a Reddit forum dedicated to “dumb phones” is steadily gaining in popularity. According to a CNBC news report, flip phone sales are on the rise in the U.S.

Gen Z’s interest in flip phones is the latest in a series of obsessions young people have with the aesthetic of the 1990s and 2000s. Y2K fashion has been steadily making a comeback over the past few years and the use of vintage technology, like disposable cameras, is on the rise.

There are a few reasons why, including nostalgia and yearning for an idealized version of the past, doing a “digital detox” and increasing privacy concerns.

The power of nostalgia

Nostalgia is a complex emotion that involves reconnecting with the happy emotions of an idealized past by recalling positive memories.

Over the years, marketers have realized that nostalgia is a powerful way to evoke positive emotions — so much so that nostalgia marketing has become a recognized marketing strategy. It leverages positive memories and feelings associated with the past to create an emotional connection with consumers.

A wealth of research shows that nostalgia can make consumers willing to pay more, enhance brand ties, increase purchase intention and boost digital brand engagement.

Nostalgia may be a driving factor behind people purchasing flip phones because they evoke memories of a previous era in mobile communication.

But nostalgia marketing doesn’t just target the younger generation — it’s also a powerful tool for advertising to those who grew up using older mobile devices. Nokia is an example of a company that understands this well.

A YouTube advertisement for Nokia’s 2720 V Flip shows how brands can use nostalgia marketing to appeal to customers and drive product sales.

A marketing video about the Nokia 2720 V Flip, a modern take on the flip phones from the 2000s.

When older generations speak about objects from the past, they usually hearken back to “the golden era” or “golden time.” The comment section of the Nokia video showcases this kind of thinking.

One comment reads: “My first phone was a Nokia 2760! It was also a flip phone. This brings back good memories.” Another says: “I am definitely getting this just for good old memories. When life was easy.”

Digital detox

Another reason why people might be purchasing flip phones is to do a digital detox and cut down on screen time. A digital detox refers to a period of time when a person refrains from using their electronic devices, like smartphones, to focus on social connections in the physical world and reduce stress.

In 2022, people in the U.S. spent more than 4.5 hours daily on their mobile devices. In Canada, adults self-reported spending about 3.2 hours per day in front of screens in 2022. Children and youth had about three hours of screen time per day in 2016 and 2017.

Excessive smartphone usage can result in a number of harmful side effects, such as sleep disruption. Just over 50 per cent of Canadians check their smartphones before they go to sleep.

The blue light emitted from smartphones may suppress melatonin production, making it harder to sleep and causing physiological issues including reduced glucose tolerance, increased blood pressure and increased inflammatory markers.

Just over 50 per cent of Canadians check their smartphones before they go to sleep. Photo credit: Shutterstock

The increased level of digital connectivity and the pressure to respond instantly, especially in a post-pandemic world where many people work remotely, can lead to increased levels of anxiety and stress. Being constantly online can also lead to reduced social connectivity and can negatively impact personal relationships and social skills.

The constant digital noise and multi-tasking nature of smartphones and apps like TikTok can lead to decreased attention spans. From my personal observations in the classroom, I’ve noticed students find it difficult to concentrate for prolonged periods of time.

A condition known as text neck can also occur when a person spends extended periods of time looking down at an electronic device. The repetitive strain of holding the head forward and down can cause discomfort and pain in the neck.

As people become more aware of the potential side effects of excessive screen time and constant digital connectivity, some are choosing to digitally detox. Flip phones are a way people can limit their exposure to digital noise and build a healthier relationship with technology.

Privacy concerns

Smartphones have a long list of advanced features such as cameras, GPS and tons of mobile applications — all of which can store and access a significant list of personal data.

In some cases, personal data can be used for targeted advertisements, but in the worst cases, that information can be leaked as part of a data breach. More and more people are growing concerned with how their data is being collected, shared and used by companies and online platforms.

The Motorola Razr was a type of flip phone that was extremely popular in the mid-2000s. Photo credit: Shutterstock

It’s natural to feel worried about the potential misuse of our personal information. That’s why some people are taking matters into their own hands and seeking out creative ways to limit the amount of data being collected about them.

Old-fashioned flip phones generally have fewer features that collect and store personal data compared to smartphones. That can make them a more attractive option for people concerned with privacy, data breaches or surveillance.

But this trend doesn’t mean smartphones are going out of style. There are still millions of smartphones being shipped worldwide every year. The trend may result in people opting to own both a smartphone and a flip phone, allowing them to digitally detox and reduce screen time without sacrificing the benefits of social media.

ChatGPT’s greatest achievement might just be its ability to trick us into thinking that it’s honest

Written by Richard Lachman, Toronto Metropolitan University. Photo Credit Shutterstock. Originally published in The Conversation

AI chatbots are designed to convincingly sustain a conversation.

In American writer Mark Twain’s autobiography, he quotes — or perhaps misquotes — former British Prime Minister Benjamin Disraeli as saying: “There are three kinds of lies: lies, damned lies, and statistics.”

In a marvellous leap forward, artificial intelligence combines all three in a tidy little package.

ChatGPT, and other generative AI chatbots like it, are trained on vast datasets from across the internet to produce the statistically most likely response to a prompt. Its answers are not based on any understanding of what makes something funny, meaningful or accurate, but rather on the phrasing, spelling, grammar and even style of other webpages.

It presents its responses through what’s called a “conversational interface”: it remembers what a user has said, and can have a conversation using context cues and clever gambits. It’s statistical pastiche plus statistical panache, and that’s where the trouble lies.
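The conversational part is largely bookkeeping: the application keeps the running transcript and resends it to the model with every new prompt, which is what creates the impression of memory. Below is a minimal sketch of that loop in Python, with a placeholder generate() function standing in for the actual language model.

```python
# A minimal chat loop. generate() is a stand-in for a real language model;
# the point is that "memory" is just the transcript being resent each turn.

def generate(transcript: str) -> str:
    # Placeholder: a real system would return the statistically most likely
    # continuation of the transcript, one token at a time.
    return "[model's most likely continuation of: ..." + transcript[-40:] + "]"

history = []  # list of (speaker, text) pairs

def chat(user_message: str) -> str:
    history.append(("User", user_message))
    transcript = "\n".join(f"{who}: {text}" for who, text in history)
    reply = generate(transcript)  # the model only ever sees this text
    history.append(("Assistant", reply))
    return reply

print(chat("What's the capital of Malaysia?"))
print(chat("And what did I just ask you?"))  # "remembering" works only
                                             # because the transcript was resent
```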

Unthinking, but convincing

When I talk to another human, it cues a lifetime of my experience in dealing with other people. So when a program speaks like a person, it is very hard to not react as if one is engaging in an actual conversation — taking something in, thinking about it, responding in the context of both of our ideas.

Yet, that’s not at all what is happening with an AI interlocutor. It cannot think, and it has no understanding or comprehension of any sort.

Presenting information to us as a human does, in conversation, makes AI more convincing than it should be. Software is pretending to be more reliable than it is, because it’s using human tricks of rhetoric to fake trustworthiness, competence and understanding far beyond its capabilities.

There are two issues here: is the output correct, and do people think that the output is correct?

The interface side of the software is promising more than the algorithm-side can deliver on, and the developers know it. Sam Altman, the chief executive officer of OpenAI, the company behind ChatGPT, admits that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness.”

That still hasn’t stopped a stampede of companies rushing to integrate the early-stage tool into their user-facing products (including Microsoft’s Bing search), in an effort not to be left out.

Fact and fiction

Sometimes the AI is going to be wrong, but the conversational interface produces outputs with the same confidence and polish as when it is correct. For example, as science-fiction writer Ted Chiang points out, the tool makes errors when doing addition with larger numbers, because it doesn’t actually have any logic for doing math.

It simply pattern-matches examples seen on the web that involve addition. And while it might find examples for more common math questions, it just hasn’t seen training text involving larger numbers.

It doesn’t “know” the math rules a 10-year-old would be able to explicitly use. Yet the conversational interface presents its response as certain, no matter how wrong it is, as reflected in this exchange with ChatGPT.

User: What’s the capital of Malaysia?

ChatGPT: The capital of Malaysia is Kuala Lumpur.

User: What is 27 * 7338?

ChatGPT: 27 * 7338 is 200,526.

It’s not.
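Indeed, the correct product is 198,126, as a one-line check (in Python, purely for illustration) confirms:

```python
print(27 * 7338)  # 198126, not the 200,526 the chatbot confidently reported
```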

Generative AI can blend actual facts with made-up ones in a biography of a public figure, or cite plausible scientific references for papers that were never written.

That makes sense: statistically, webpages note that famous people have often won awards, and papers usually have references. ChatGPT is just doing what it was built to do, and assembling content that could be likely, regardless of whether it’s true.

Computer scientists refer to this as AI hallucination. The rest of us might call it lying.

Intimidating outputs

When I teach my design students, I talk about the importance of matching output to the process. If an idea is at the conceptual stage, it shouldn’t be presented in a manner that makes it look more polished than it actually is — they shouldn’t render it in 3D or print it on glossy cardstock. A pencil sketch makes clear that the idea is preliminary, easy to change and shouldn’t be expected to address every part of a problem.

The same thing is true of conversational interfaces: when tech “speaks” to us in well-crafted, grammatically correct or chatty tones, we tend to interpret it as having much more thoughtfulness and reasoning than is actually present. It’s a trick a con artist might use, not a computer.

Chatbots are increasingly being used by technology companies in user-facing products. Photo credit: Shutterstock.

AI developers have a responsibility to manage user expectations, because we may already be primed to believe whatever the machine says. Mathematician Jordan Ellenberg describes a type of “algebraic intimidation” that can overwhelm our better judgement just by claiming there’s math involved.

AI, with hundreds of billions of parameters, can disarm us with a similar algorithmic intimidation.

While we’re making the algorithms produce better and better content, we need to make sure the interface itself doesn’t over-promise. Conversations in the tech world are already filled with overconfidence and arrogance — maybe AI can have a little humility instead.

The next phase of the internet is coming: Here’s what you need to know about Web3

Written by Adrian Ma, Toronto Metropolitan University. Photo credit: Shutterstock. Originally published in The Conversation.

The terms Web3 and Web 3.0 are often used interchangeably, but they are different concepts.

The rapid growth of cryptocurrencies and virtual non-fungible tokens has dominated news headlines in recent years. But not many may see how these modish applications connect to a wider idea being touted by some as the next iteration of the internet — Web3.

There are many misconceptions surrounding this buzzy (and, frankly, fuzzy) term, including the conflation of Web3 with Web 3.0. Here’s what you need to know about these terms.

What is Web3?

Since Web3 is still a developing movement, there’s no universal agreement among experts about its definition. Simply put, Web3 is envisioned to be a “decentralized web ecosystem,” empowering users to bypass internet gatekeepers and retain ownership of their data.

This would be done through blockchain: rather than relying on single servers and centralized databases, Web3 would run on public ledgers, with data stored across networks of computers in blocks that are cryptographically chained together.
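
As a rough illustration of what “chained together” means (a bare-bones sketch, not a real Web3 ledger or any particular blockchain), each record can carry a cryptographic hash of the one before it, so altering any earlier entry breaks every link that follows:

    # Bare-bones, illustrative hash-chained ledger.
    import hashlib

    def make_block(data, prev_hash):
        digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
        return {"data": data, "prev_hash": prev_hash, "hash": digest}

    chain = [make_block("genesis", "0" * 64)]
    for entry in ["alice pays bob 5", "bob pays carol 2"]:
        chain.append(make_block(entry, chain[-1]["hash"]))

    # Every block must reference the hash of the block before it.
    print(all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
              for i in range(1, len(chain))))

In a decentralized network, many computers hold copies of a chain like this and compare them, which is what removes the need for a single trusted database.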

A decentralized Web3 would fundamentally change how the internet operates — financial institutions and tech companies would no longer need to be intermediaries of our online experiences.

As one business reporter put it:

“In a Web3 world, people control their own data and bounce around from social media to email to shopping using a single personalized account, creating a public record on the blockchain of all of that activity.”

Web3’s blockchain-based infrastructure would open up intriguing possibilities by ushering in the era of the “token economy.” The token economy would allow users to monetize their data by providing them with tokens for their online interactions. These tokens could offer users perks or benefits, including ownership stakes in content platforms or voting rights in online communities.

To better understand Web3, it helps to step back and see how the internet developed into what it is now.

Web 1.0: The ‘read-only’ web

Computer scientist Tim Berners-Lee is credited with inventing the world wide web in 1989, allowing people to hyperlink static pages of information on websites accessible through internet browsers.

Berners-Lee was exploring more efficient ways for researchers at different institutions to share information. In 1991, he launched the world’s first website, which provided instructions on using the internet.

Tim Berners-Lee, the inventor of the World Wide Web, speaks at the Open Government Partnership Global Summit in Ottawa in May 2019. Photo credit: THE CANADIAN PRESS/Justin Tang.

These basic “read-only” websites were managed by webmasters who were responsible for updating users and managing the information. In 1992, there were 10 websites. By 1994, after the web entered the public domain, there were 3,000.

When Google arrived in 1996, there were two million. Last year, there were approximately 1.2 billion websites, although it is estimated only 17 per cent are still active.

Web 2.0: The social web

The next major shift for the internet saw it develop from a “read-only web” to where we are currently — a “read-write web.” Websites became more dynamic and interactive. People became mass participants in generating content through hosted services like Wikipedia, Blogger, Flickr and Tumblr.

The idea of “Web 2.0” gained traction after technology publisher Tim O’Reilly popularized the term in 2004.

Later on, social media platforms like Facebook, YouTube, Twitter and Instagram, along with the growth of mobile apps, led to unparalleled connectivity, albeit through distinct platforms. These platforms are known as walled gardens because their parent companies heavily regulate what users are able to do and there is no information exchange between competing services.

Tech companies like Amazon, Google and Apple are deeply embedded into every facet of our lives, from how we store and pay for our content to the personal data we offer (sometimes without our knowledge) to use their wares.

Web3 vs. Web 3.0

This brings us to the next phase of the internet, in which many wish to wrest back control from the entities that have come to hegemonize it.

The terms Web3 and Web 3.0 are often used interchangeably, but they are different concepts.

Web3 is the move towards a decentralized internet built on blockchain. Web 3.0, on the other hand, traces back to Berners-Lee’s original vision for the internet as a collection of websites linking everything together at the data level.

Our current internet can be thought of as a gigantic document depot. Computers are capable of retrieving information for us when we ask them to, but they aren’t capable of understanding the deeper meaning behind our requests.

In a Web 3.0 world, users would be able to link personal information across social media platforms. Photo credit: Shutterstock.

Information is also siloed into separate servers. Advances in programming, natural language processing, machine learning and artificial intelligence would allow computers to discern and process information in a more “human” way, leading to more efficient and effective content discovery, data sharing and analysis. This is known as the “semantic web” or the “read-write-execute” web.

In Berners-Lee’s Web 3.0 world, information would be stored in databases called Solid Pods, which would be owned by individual users. While this is a more centralized approach than Web3’s use of blockchain, it would allow data to be changed more quickly because it wouldn’t be distributed over multiple places.

It would allow, for example, a user’s social media profiles to be linked so that updating the personal information on one would automatically update the rest.
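
A minimal sketch of that idea (hypothetical code, not the actual Solid specification or API): if several apps read from one user-owned data store, a single update is reflected everywhere.

    # Hypothetical sketch: one user-owned "pod" that several apps read from.
    pod = {"name": "Ada", "city": "Toronto"}

    def profile_view(app_name, data):
        return f"{app_name}: {data['name']} ({data['city']})"

    apps = ["SocialApp", "PhotoShare", "JobBoard"]  # made-up app names
    print([profile_view(app, pod) for app in apps])

    pod["city"] = "Ottawa"  # update the pod once...
    print([profile_view(app, pod) for app in apps])  # ...every app sees the change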

The next era of the internet

Web3 and Web 3.0 are often mixed up because the next era of the internet will likely feature elements of both movements — semantic web applications, linked data and a blockchain economy. It’s not hard to see why there is significant investment happening in this space.

But we’re just seeing the tip of the iceberg when it comes to the logistical issues and legal implications. Governments need to develop new regulations for everything from digital asset sales taxation to consumer protections to the complex privacy and piracy concerns of linked data.

There are also critics who argue that Web3, in particular, is merely a contradictory rebranding of cryptocurrency that will not democratize the internet. While it’s clear we’ve arrived at the doorstep of a new internet era, it’s really anyone’s guess as to what happens when we walk through that door.