AI model predicts human behavior from our poor decision-making



Human beings behave irrationally — or as an artificially intelligent robot might say, “sub-optimally.” Data, the emotionless yet affable android depicted in Star Trek: The Next Generation, regularly struggled to comprehend humans’ flawed decision-making. If he had been programmed with a new model devised by researchers at MIT and the University of Washington, he might have had an easier go of it.

In a paper published last month, Athul Paul Jacob, a Ph.D. student in AI at MIT, Dr. Jacob Andreas, his academic advisor, and Abhishek Gupta, an assistant professor in computer science and engineering at the University of Washington, described a new way to model an agent’s behavior. They then used their method to predict humans’ goals or actions.

Jacob, Andreas, and Gupta created what they termed a “latent inference budget model.” Its underlying breakthrough lies in inferring a human or machine’s “computational constraints” based on prior actions. These constraints result in sub-optimal choices. For example, a common constraint on human decision-making is time. When confronted with a difficult choice, we typically don’t spend hours (or longer) gaming out every possible outcome. Instead, we decide quickly, without gathering all the information available.

Leveraging irrational decision-making

Models currently exist that account for irrational decision-making, but these only predict that errors will occur randomly. In reality, humans and machines err in more systematic, predictable ways. The latent inference budget model can quickly identify these patterns and then use them to forecast future behavior.
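The paper’s actual method is more involved, but the core idea — treating planning depth as a latent “budget” and inferring it from an agent’s observed choices — can be illustrated with a toy sketch. Everything below (the chain environment, function names, reward values) is invented for illustration and does not come from the paper: a shallow planner is lured by a small nearby reward, a deeper planner finds the larger distant one, and we recover each agent’s depth by asking which budget best explains its moves.

```python
# Toy illustration (not the authors' code) of inferring a latent
# "inference budget" -- here, planning depth -- from observed behavior.
# Environment: a chain of states 0..5; the agent moves left or right.
# A small reward at state 1 tempts shallow planners; the big reward
# sits at state 5 and is only visible to deeper lookahead.

def rollout_value(state, depth, rewards, n_states):
    # Best total reward reachable within `depth` further steps.
    if depth == 0:
        return 0.0
    best = float("-inf")
    for nxt in (state - 1, state + 1):
        if 0 <= nxt < n_states:
            v = rewards[nxt] + rollout_value(nxt, depth - 1, rewards, n_states)
            best = max(best, v)
    return best

def choose(state, depth, rewards, n_states):
    # Greedy action under depth-limited lookahead (the agent's "budget").
    candidates = [n for n in (state - 1, state + 1) if 0 <= n < n_states]
    return max(candidates,
               key=lambda n: rewards[n] + rollout_value(n, depth - 1, rewards, n_states))

def infer_depth(observed_moves, start, rewards, n_states, max_depth=5):
    # Pick the planning depth whose predictions best match observed moves:
    # the latent budget is whatever depth explains the behavior.
    def score(d):
        state, hits = start, 0
        for move in observed_moves:
            hits += (choose(state, d, rewards, n_states) == move)
            state = move
        return hits
    return max(range(1, max_depth + 1), key=score)

rewards = [0, 1, 0, 0, 0, 10]

# A depth-1 planner grabs the nearby reward; a depth-4 planner heads
# toward the large one.
shallow = choose(2, 1, rewards, 6)   # -> 1
deep = choose(2, 4, rewards, 6)      # -> 3

# Having watched an agent walk 2 -> 3 -> 4 -> 5, we infer a deep budget.
budget = infer_depth([3, 4, 5], 2, rewards, 6)
```

Once the depth is inferred, the same `choose` function predicts the agent’s next move — including its characteristic mistakes — which is the sense in which sub-optimality becomes a forecastable pattern rather than random noise.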

Across three tests, the researchers found that their new model generally outperforms the old models: It was as good or better at predicting a computer algorithm’s route when navigating a maze, a human chess player’s next move, and what a human speaker was trying to say from a quick utterance.

Jacob says that the research process made him realize how fundamental planning is to human behavior. People are not inherently rational or irrational; some simply spend more time planning their actions than others.

“At the end of the day, we saw that the depth of the planning, or how long someone thinks about the problem, is a really good proxy of how humans behave,” he said in a statement.

Jacob envisions the model being used in futuristic robotic helpers or AI assistants.

“If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have,” he said.

This is not scientists’ first attempt to develop tools that help AI predict human decision-making. Most researchers pursuing this goal envision positive futures. For example, we may someday see AI seamlessly coordinating their actions with ours, providing assistance in everyday tasks, boosting productivity at workplaces, and being our drinking buddies.

But there are more dystopian possibilities, too. AI models expressly designed to predict human behavior could also be used by bad actors to manipulate us. With enough data on how humans react to various stimuli, AI could be programmed to elicit responses that might not be in the targeted individuals’ best interest. Imagine if AI got really good at this. It would bring new urgency to the question of whether humans are agents with free will or simply automata reacting to external forces.

