
How machines can learn from human behaviour

Designing intelligent machines that can resemble and model human behaviour

By Susan Fourtané Published a day ago Updated a day ago 4 min read
Top Story - March 2026
Photo by Amos K on Unsplash

In order to understand where we are and where we are going, we need to understand where we were first. - Susan Fourtané

Could a human behaviour simulator be embedded into a robot or online avatar to the point that it becomes hard to distinguish between a real person and an artificial intelligence? Scientists have been upping the stakes in this “Turing test” for years, to the point that human-mimicking programmes are ready to answer tricky questions, assist people with online shopping or act as companions.

Back in 2018, researchers across a clutch of European universities developed a large-scale experimental lab programme involving over 22,000 people to investigate the socio-economic problems that arise from human-computer interactions. At the European IBSEN project, the researchers’ goal was to provide a breakthrough towards building a future human behaviour simulator, a technology that could impact fields from robotics to economics, and offer new instruments to policy makers.

According to project coordinator Anxo Sanchez, professor of Applied Mathematics at Universidad Carlos III de Madrid, Spain, the advance needed was the capability to run large-scale experiments on how people actually behave. “Once you have an ample repertoire of behaviours, you could go for a simulator in which there are a number of computer agents that interact with each other with the rules you have, and therefore give rise to collective behaviour which should mimic that of society,” says Sanchez.

The researchers were looking at demonstration cases such as cooperation in social networks, where participants had to decide whether they wanted to collaborate with teammates toward some common goals, and how to foster and maintain that synergy when some were tempted to cheat and let the others do the work.

For example, in their experiments around 1,000 people were asked to decide how much money they wanted to give to a common pot, which would be equally shared among all participants, irrespective of their contributions. Each participant then kept whatever they didn’t contribute, along with their share of the common pot.

The researchers realised that, after a few rounds, some contributors gave no money but still benefited from the common pot, leading the others to reduce their sums and, eventually, stop donating altogether. However, in the large-scale experiments, when people could not possibly track everyone else’s contributions, the whole scenario changed, Sanchez explains.

These experiments reveal that human behaviour depends on the way participants are informed of the outcome of previous rounds. “This is letting us model how people in a cooperative situation like this behave depending on the information received, and gives hints as to how more contributions to the common good can be promoted.”
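The dynamic described above, contributions collapsing once free riding goes unpunished, can be sketched as a toy agent-based simulation. This is an illustrative model only, not the IBSEN experimental protocol: the endowment of 10, the pot multiplier of 2, and the “conditional cooperator” rule (matching the previous round’s group average) are all assumptions made for the sketch.

```python
import random

ENDOWMENT = 10.0    # assumed per-round budget for each player
MULTIPLIER = 2.0    # assumed: the pot is doubled before being shared

def play_round(contributions):
    """Everyone receives an equal share of the multiplied common pot,
    irrespective of what they put in; the rest of the endowment is kept."""
    share = MULTIPLIER * sum(contributions) / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

def simulate(n_players=10, n_free_riders=2, n_rounds=5, seed=0):
    rng = random.Random(seed)
    free_riders = set(rng.sample(range(n_players), n_free_riders))
    avg = 5.0  # cooperators start by giving half their endowment
    history = []
    for _ in range(n_rounds):
        # Free riders give nothing; conditional cooperators match
        # the previous round's group average.
        contributions = [0.0 if i in free_riders else avg
                         for i in range(n_players)]
        payoffs = play_round(contributions)
        avg = sum(contributions) / n_players
        # Record (avg contribution, best payoff, worst payoff);
        # the best payoff always belongs to a free rider.
        history.append((avg, max(payoffs), min(payoffs)))
    return history

print(simulate())  # the average contribution decays round after round
```

Even in this crude sketch, free riders earn more than cooperators in every round, and the group average contribution shrinks toward zero, mirroring the decay the researchers observed when contributions were fully visible.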

Researchers were also looking to predict how people react to online avatars in the business environment, such as in online buying and selling. For example, Creative Virtual, a London-based company that offers virtual assistant avatars for customer service, already provides organisations with the capability to embed ‘personality’ and ‘emotions’ into online chatbots. “We see people interact with chatbots for longer when they are represented by an avatar that contains our small-talk module,” says Chris Ezekiel, founder and CEO of the company. “We even see people build ‘relationships’ with them, and this type of behaviour will only increase when they are combined with robots.”

One area where predictive power in human-computer interactions would be useful is in the service and healthcare industries. Cristina Andersson, consultant and coordinator for the national AiRo (Artificial Intelligence and Robotics) in welfare programme in Finland explained that “a hospitality robot needs to behave in a way that is accepted by humans whereas a manufacturing robot just does its job.” She says when incorporating behavioural rules into machines “there should be a piece of code somewhere saying that robots must obey the law. Then they will play the same game as we do.”

These types of experiments can make online help and support services more human-like. A key question is what kind of human behaviour, and how much of it, should be incorporated into artificially intelligent machines, and who will be held responsible for their behaviour. “As long as the robots are not autonomous, the owner should be responsible,” says Andersson. “Or the user, if the user can change the robot’s behaviour.”

Like many other Future and Emerging Technologies (FET) projects, there is an element of risk and the researchers concede that the experimental design may not necessarily yield solid and stable answers to many questions. Social human behaviour is extremely fluid and, at times, rather irrational.

Indeed, given where we stand today and how quickly this technology has evolved, embedding ever more advanced social human behaviour into robots would not be wise. In fact, it would be a risk for future human society. Indestructible machines that can act and behave like humans and override their initial programmes would present a risk to humanity. The fact that it can be done does not mean it should be done.

Author’s note: I conducted all the interviews and wrote the original article for Youris.com, the European Research Media Centre, in 2018. I have now edited and updated it in 2026 to reflect the evolution of the human desire to embed human-like intelligence and advanced human-like behaviour into machines. In other words, revisiting how it all started gives us an idea of where we are going as artificial intelligence, machine learning, AI agents, and robotics technologies evolve. It also shows us how fast AI will be moving going forward.


About the Creator

Susan Fourtané

Susan Fourtané is a Science and Technology Journalist, a professional writer with over 18 years’ experience writing for global media and industry publications. She is a member of the ABSW, WFSJ, the Society of Authors, and the London Press Club.
