Predicting the “digital superpowers” we could have by 2030
It’s 2025, the year when mainstream computing will start to shift from a race to develop increasingly powerful tools to a race to develop increasingly powerful abilities. The difference between a tool and an ability is subtle yet profound. From the very first hammerstones to the latest quantum computers, tools are external artifacts that help us humans overcome our organic limitations. Humanity’s ingenious tools have greatly expanded what we can accomplish as individuals, teams, and massive civilizations.

Abilities are different. We experience abilities in the first person as self-embodied capabilities that feel internal and instantly accessible to our conscious minds. For example, language and mathematics are human technologies that we install in our brains and carry around with us throughout our lives, expanding our abilities to think, create, and collaborate. They are genuine superpowers and they feel so inherent to our existence that we rarely think of them as technologies at all. 

“Augmented mentality”

Unlike our verbal and mathematical superpowers, the next wave of superhuman abilities will require some hardware, but we will still experience them as self-embodied skills that we carry around with us throughout our lives. These abilities will emerge from the convergence of AI, augmented reality, and conversational computing. They will be unleashed by context-aware AI agents that are loaded into body-worn devices that see what we see, hear what we hear, experience what we experience, and provide us with enhanced abilities to perceive and interpret our world. I refer to this new technological direction as augmented mentality and I predict that by 2030, a majority of us will live our lives with context-aware AI agents bringing digital superpowers into our daily experiences.  

The majority of these superpowers will be delivered through AI-powered glasses with cameras and microphones that act as their eyes and ears, but there will be other form factors for people who just don’t like eyewear. For example, there will be earbuds that have cameras built in — a reasonable alternative if you don’t have long hair. We will whisper to these intelligent devices, and they will whisper back, giving us recommendations, guidance, spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that will coach us through our days like an omniscient alter ego.  

How will our superpowers unfold? 

Consider this common scenario: You’re walking downtown and spot a store across the street. You wonder: What time does it open? So, you grab your phone and type (or say) the name of the store. You quickly find the hours on a website and maybe review other info about the store as well. That is the basic tool-use model of computing prevalent today.

Now, let’s look at how Big Tech will transition to an ability computing model:     

Phase 1: You are wearing AI-powered glasses that can see what you see, hear what you hear, and process your surroundings through a multimodal large language model. Now when you spot that store across the street, you simply whisper to yourself, “I wonder when it opens?” and a voice will instantly ring back into your ears, “10:30 a.m.” 
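To make that Phase 1 loop concrete, here is a minimal sketch of how a whispered question and the glasses' first-person context might be bundled into a single query to a multimodal model. Everything here is a hypothetical stand-in rather than a real device SDK: GlassesContext, answer_whispered_query, the llm.generate call, and the current_context and speak helpers are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class GlassesContext:
    frame_jpeg: bytes      # the current camera frame (what the wearer sees)
    ambient_text: str      # rolling transcript of nearby audio (what they hear)
    location: str          # coarse place label, e.g. "Main St, downtown"

def answer_whispered_query(ctx: GlassesContext, whispered_query: str, llm) -> str:
    """Send the wearer's whispered question plus first-person context to a
    multimodal model and return a short, speakable answer."""
    prompt = (
        "You are an assistant that shares the wearer's first-person view.\n"
        f"Location: {ctx.location}\n"
        f"Recently heard: {ctx.ambient_text}\n"
        f"Question: {whispered_query}\n"
        "Answer in one short spoken sentence."
    )
    # `llm.generate` stands in for whichever multimodal API the device uses.
    return llm.generate(prompt=prompt, image=ctx.frame_jpeg)

# Hypothetical usage on the glasses:
#   reply = answer_whispered_query(current_context(), "I wonder when it opens?", llm)
#   speak(reply)   # e.g. "10:30 a.m."
```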

I know this is a subtle shift from asking your phone to look up the name of a store, but it will feel profound. The reason is that the context-aware AI agent will share your personal reality. It’s not merely tracking your location like GPS — it’s seeing what you see, hearing what you hear, and paying attention to what you are paying attention to. This will make it feel far less like a tool and far more like an internal ability directly linked to your own first-person experiences. 

In addition, it will not be a one-way interaction in which we ask the AI agent for assistance. The AI agent will often be proactive and will ask us questions based on the context of our world (listen to this fun audio-play for examples). And when we are questioned by the AI that whispers in our ears, we will often answer by just nodding our heads to affirm or shaking our heads to reject. It will feel so natural and seamless that we might not even consciously realize that we replied. It will feel like a deliberation within ourselves.
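As a rough illustration of that nod-to-affirm, shake-to-reject reply, a toy classifier over the glasses' head-motion data might look like the sketch below. The pitch/yaw angle inputs and the threshold are assumptions for illustration only; a real device would use far more robust gesture models.

```python
def classify_head_gesture(pitch_deltas: list[float],
                          yaw_deltas: list[float],
                          threshold_deg: float = 8.0) -> str | None:
    """Return 'yes' for a nod (mostly pitch motion), 'no' for a shake
    (mostly yaw motion), or None if no deliberate gesture is detected."""
    pitch_motion = sum(abs(d) for d in pitch_deltas)   # total up/down movement
    yaw_motion = sum(abs(d) for d in yaw_deltas)       # total left/right movement
    if max(pitch_motion, yaw_motion) < threshold_deg:
        return None
    return "yes" if pitch_motion > yaw_motion else "no"
```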

Phase 2: By 2030, we will not need to whisper to the AI agents traveling with us through our lives. Instead, you will be able to simply mouth the words, and the AI will know what you are saying by reading your lips and detecting activation signals from your muscles. I am confident that “mouthing” will be deployed because it’s more private, more resilient to noisy spaces, and most importantly, it will feel more personal, internal, and self-embodied. 

Phase 3: By 2035, you may not even need to mouth the words. That’s because the AI will learn to interpret the signals in your muscles with such subtlety and precision that you will simply need to think about mouthing the words to convey your intent. You will be able to focus your attention on any item or activity in your world, think a question, and useful information will ring back from your AI glasses like an all-knowing alter ego in your head.

Of course, the capabilities will go far beyond just wondering about items and activities around you. That’s because the onboard AI that shares your first-person reality will learn to anticipate the information you desire before you even ask for it. For example, when a coworker approaches from down the hall and you can’t quite remember her name, the AI will sense your unease, and a voice will ring: “Jenny from quantum computing.”

Or when you grab a box of cereal in a store and are curious about the carbs, or wonder whether it’s cheaper at Walmart, the answers will just ring in your ears or appear visually. It will even give you superhuman abilities to assess the emotions on other people’s faces and predict their moods, goals, or intentions, coaching you during real-time conversations to make you more compelling, appealing, or persuasive (see a fun video example).

As AI-powered glasses add mixed-reality features that incorporate seamless visual content into our surroundings, these devices will give us literal superpowers, like X-ray vision. For example, the hardware will have access to a digital model of your home and will use it to let you peer through walls and instantly find studs, pipes, or wiring.

I know some people will be skeptical about my prediction of mass adoption by 2030, but I don’t make these claims lightly. I have been focused on technologies that augment our reality and expand human abilities for over 30 years and I can say without question that the mobile computing market is about to run in this direction in a very big way.  

Over the past 12 months, two of the most influential and innovative companies in the world, Meta and Google, revealed their goal to give us superpowers. Meta made the first big move by adding a context-aware AI to their Ray-Ban glasses and by showing off their Orion mixed-reality prototype, which adds impressive visual capabilities. Meta is now very well positioned to leverage their big investments in AI and XR and become a major player in the mobile computing market, and they will likely do so by selling us superpowers we can’t resist.

Not to be outdone, Google recently announced Android XR, a new AI-powered operating system for augmenting our world with seamless context-aware content. They also announced a partnership with Samsung to bring new glasses and headsets to market. With over 70% market share for mobile operating systems and an increasingly strong AI presence with Gemini, Google is well-positioned to be the leading provider of technology-enabled human superpowers within the next 18 months.

But what about the risks? 

To paraphrase the famous 1962 Spider-Man comic, “With great power comes great responsibility.” This wisdom is literally about superpowers. The difference this time is that the primary responsibility will not fall on the consumers who receive these techno-powers but on the companies that provide them and the regulators that oversee them.

After all, when wearing AI-powered AR eyewear, each of us could find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper in our ears with targeted advice and guidance. While the intentions might be positive, the potential for abuse is equally profound.   

To avoid such dystopian outcomes, my most significant recommendation to both consumers and manufacturers is to adopt a subscription business model. If the arms race for selling superpowers is driven by which company can provide the most amazing new abilities for a reasonable monthly fee, then we will all benefit. If, instead, the business model becomes a competition to monetize superpowers by delivering the most effective targeted influence into our eyes and ears, we could easily be manipulated throughout our daily lives.

I know some people find the concept of AI-powered glasses invasive or even creepy and can’t imagine wanting or needing these products. I understand the sentiment, but by 2030 the superpowers these devices give us won’t feel optional. After all, not having them could put us at a social and cognitive disadvantage. It is now up to the industry and regulators to ensure that these new abilities are rolled out in a manner that is not intrusive, invasive, manipulative, or dangerous. That will require careful planning and oversight.
