OpenAI has released its first research into how using ChatGPT affects people’s emotional wellbeing

OpenAI says over 400 million people use ChatGPT every week. But how does interacting with it affect us? Does it make us more or less lonely? These are some of the questions OpenAI set out to investigate, in partnership with the MIT Media Lab, in a pair of new studies

They found that only a small subset of users engage emotionally with ChatGPT. This isn’t surprising given that ChatGPT isn’t marketed as an AI companion app like Replika or Character.AI, says Kate Devlin, a professor of AI and society at King’s College London, who did not work on the project. “ChatGPT has been set up as a productivity tool,” she says. “But we know that people are using it like a companion app anyway.” In fact, the people who do use it that way are likely to interact with it for extended periods of time, some of them averaging about half an hour a day. 

“The authors are very clear about what the limitations of these studies are, but it’s exciting to see they’ve done this,” Devlin says. “To have access to this level of data is incredible.” 

The researchers found some intriguing differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same. Meanwhile, participants who interacted with ChatGPT’s voice mode in a gender that was not their own for their interactions reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment. OpenAI plans to submit both studies to peer-reviewed journals.

Chatbots powered by large language models are still a nascent technology, and it’s difficult to study how they affect us emotionally. A lot of existing research in the area—including some of the new work by OpenAI and MIT—relies upon self-reported data, which may not always be accurate or reliable. That said, this latest research does chime with what scientists so far have discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a kind of feedback loop where the happier you act, the happier the AI seems, or on the flipside, if you act sadder, so does the AI.  

OpenAI and the MIT Media Lab used a two-pronged method. First they collected and analyzed real-world data from close to 40 million interactions with ChatGPT. Then they asked the 4,076 users who’d had those interactions how they made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This was more in-depth, examining how participants interacted with ChatGPT for a minimum of five minutes each day. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and their sense of whether their use of the bot was problematic. They found that participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely, and to rely on it more. 

This work is an important first step toward greater insight into ChatGPT’s impact on us, which could help AI platforms enable safer and healthier interactions, says Jason Phang, an OpenAI safety researcher who worked on the project.

“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is,” he says.

Although the research is welcome, it’s still difficult to identify when a human is—and isn’t—engaging with technology on an emotional level, says Devlin. She says the study participants may have been experiencing emotions that weren’t recorded by the researchers.

“In terms of what the teams set out to measure, people might not necessarily have been using ChatGPT in an emotional way, but you can’t divorce being a human from your interactions [with technology],” she says. “We use these emotion classifiers that we have created to look for certain things—but what that actually means to someone’s life is really hard to extrapolate.”

Correction: An earlier version of this article misstated that study participants set the gender of ChatGPT’s voice, and that OpenAI did not plan to publish either study. Study participants were assigned the voice mode gender, and OpenAI plans to submit both studies to peer-reviewed journals. The article has since been updated.

Powering the food industry with AI

There has never been a more pressing time for food producers to harness technology to tackle the sector’s tough mission. To produce ever more healthy and appealing food for a growing global population in a way that is resilient and affordable, all while minimizing waste and reducing the sector’s environmental impact. From farm to factory, artificial intelligence and machine learning can support these goals by increasing efficiency, optimizing supply chains, and accelerating the research and development of new types of healthy products. 

In agriculture, AI is already helping farmers to monitor crop health, tailor the delivery of inputs, and make harvesting more accurate and efficient. In labs, AI is powering experiments in gene editing to improve crop resilience and enhance the nutritional value of raw ingredients. For processed foods, AI is optimizing production economics, improving the texture and flavor of products like alternative proteins and healthier snacks, and strengthening food safety processes too. 

But despite this promise, industry adoption still lags. Data-sharing remains limited and companies across the value chain have vastly different needs and capabilities. There are also few standards and data governance protocols in place, and more talent and skills are needed to keep pace with the technological wave. 

All the same, progress is being made and the potential for AI in the food sector is huge. Key findings from the report are as follows: 

Predictive analytics are accelerating R&D cycles in crop and food science. AI reduces the time and resources needed to experiment with new food products and turns traditional trial-and-error cycles into more efficient data-driven discoveries. Advanced models and simulations enable scientists to explore natural ingredients and processes by simulating thousands of conditions, configurations, and genetic variations until they crack the right combination. 

AI is bringing data-driven insights to a fragmented supply chain. AI can revolutionize the food industry’s complex value chain by breaking operational silos and translating vast streams of data into actionable intelligence. Notably, large language models (LLMs) and chatbots can serve as digital interpreters, democratizing access to data analysis for farmers and growers, and enabling more informed, strategic decisions by food companies. 

Partnerships are crucial for maximizing respective strengths. While large agricultural companies lead in AI implementation, promising breakthroughs often emerge from strategic collaborations that leverage complementary strengths with academic institutions and startups. Large companies contribute extensive datasets and industry experience, while startups bring innovation, creativity, and a clean data slate. Combining expertise in a collaborative approach can increase the uptake of AI. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

When you might start speaking to robots

Last Wednesday, Google made a somewhat surprising announcement. It launched a version of its AI model, Gemini, that can do things not just in the digital realm of chatbots and internet search but out here in the physical world, via robots. 

Gemini Robotics fuses the power of large language models with spatial reasoning, allowing you to tell a robotic arm to do something like “put the grapes in the clear glass bowl.” These commands get filtered by the LLM, which identifies intentions from what you’re saying and then breaks them down into commands that the robot can carry out. For more details about how it all works, read the full story from my colleague Scott Mulligan.

You might be wondering if this means your home or workplace might one day be filled with robots you can bark orders at. More on that soon. 

But first, where did this come from? Google has not made big waves in the world of robotics so far. Alphabet acquired some robotics startups over the past decade, but in 2023 it shut down a unit working on robots to solve practical tasks like cleaning up trash. 

Despite that, the company’s move to bring AI into the physical world via robots is following the exact precedent set by other companies in the past two years (something that, I must humbly point out, MIT Technology Review has long seen coming). 

In short, two trends are converging from opposite directions: Robotics companies are increasingly leveraging AI, and AI giants are now building robots. OpenAI, for example, which shuttered its robotics team in 2021, started a new effort to build humanoid robots this year. In October, the chip giant Nvidia declared the next wave of artificial intelligence to be “physical AI.”

There are lots of ways to incorporate AI into robots, starting with improving how they are trained to do tasks. But using large language models to give instructions, as Google has done, is particularly interesting. 

It’s not the first. The robotics startup Figure went viral a year ago for a video in which humans gave instructions to a humanoid on how to put dishes away. Around the same time, a startup spun off from OpenAI, called Covariant, built something similar for robotic arms in warehouses. I saw a demo where you could give the robot instructions via images, text, or video to do things like “move the tennis balls from this bin to that one.” Covariant was acquired by Amazon just five months later. 

When you see such demos, you can’t help but wonder: When are these robots going to come to our workplaces? What about our homes?

If Figure’s plans offer a clue, the answer to the first question is soon. The company announced on Saturday that it is building a high-volume manufacturing facility set to manufacture 12,000 humanoid robots per year. But training and testing robots, especially to ensure they’re safe in places where they work near humans, still takes a long time

For example, Figure’s rival Agility Robotics claims it’s the only company in the US with paying customers for its humanoids. But industry safety standards for humanoids working alongside people aren’t fully formed yet, so the company’s robots have to work in separate areas.

This is why, despite recent progress, our homes will be the last frontier. Compared with factory floors, our homes are chaotic and unpredictable. Everyone’s crammed into relatively close quarters. Even impressive AI models like Gemini Robotics will still need to go through lots of tests both in the real world and in simulation, just like self-driving cars. This testing might happen in warehouses, hotels, and hospitals, where the robots may still receive help from remote human operators. It will take a long time before they’re given the privilege of putting away our dishes.  

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Is Google playing catchup on search with OpenAI?

This story originally appeared in The Debrief with Mat Honan, a weekly newsletter about the biggest stories in tech from our editor in chief. Sign up here to get the next one in your inbox.

I’ve been mulling over something that Will Heaven, our senior editor for AI, pointed out not too long ago: that all the big players in AI seem to be moving in the same directions and converging on the same things. Agents. Deep research. Lightweight versions of models. Etc. 

Some of this makes sense in that they’re seeing similar things and trying to solve similar problems. But when I talked to Will about this, he said, “it almost feels like a lack of imagination, right?” Yeah. It does.

What got me thinking about this, again, was a pair of announcements from Google over the past couple of weeks, both related to the ways search is converging with AI language models, something I’ve spent a lot of time reporting on over the past year. Google took direct aim at this intersection by adding new AI features from Gemini to search, and also by adding search features to Gemini. In using both, what struck me more than how well they work is that they are really just about catching up with OpenAI’s ChatGPT.  And their belated appearance in March of the year 2025 doesn’t seem like a great sign for Google. 

Take AI Mode, which it announced March 5. It’s cool. It works well. But it’s pretty much a follow-along of what OpenAI was already doing. (Also, don’t be confused by the name. Google already had something called AI Overviews in search, but AI Mode is different and deeper.) As the company explained in a blog post, “This new Search mode expands what AI Overviews can do with more advanced reasoning, thinking and multimodal capabilities so you can get help with even your toughest questions.”

Rather than a brief overview with links out, the AI will dig in and offer more robust answers. You can ask followup questions too, something AI Overviews doesn’t support. It feels like quite a natural evolution—so much so that it’s curious why this is not already widely available. For now, it’s limited to people with paid accounts, and even then only via the experimental sandbox of Search Labs. But more to the point, why wasn’t it available, say, last summer?

The second change is that it added search history to its Gemini chatbot, and promises even more personalization is on the way. On this one, Google says “personalization allows Gemini to connect with your Google apps and services, starting with Search, to provide responses that are uniquely insightful and directly address your needs.”

Much of what these new features are doing, especially AI Mode’s ability to ask followup questions and go deep, feels like hitting feature parity with what ChatGPT has been doing for months. It’s also been compared to Perplexity, another generative AI search engine startup. 

What neither feature feels like is something fresh and new. Neither feels innovative. ChatGPT has long been building user histories and using the information it has to deliver results. While Gemini could also remember things about you, it’s a little bit shocking to me that Google has taken this long to bring in signals from its other products. Obviously there are privacy concerns to field, but this is an opt-in product we’re talking about. 

The other thing is that, at least as I’ve found so far, ChatGPT is just better at this stuff. Here’s a small example. I tried asking both: “What do you know about me?” ChatGPT replied with a really insightful, even thoughtful, profile based on my interactions with it. These aren’t  just the things I’ve explicitly told it to remember about me, either. Much of it comes from the context of various prompts I’ve fed it. It’s figured out what kind of music I like. It knows little details about my taste in films. (“You don’t particularly enjoy slasher films in general.”) Some of it is just sort of oddly delightful. For example: “You built a small shed for trash cans with a hinged wooden roof and needed a solution to hold it open.”

Google, despite having literal decades of my email, search, and browsing history, a copy of every digital photo I’ve ever taken, and more darkly terrifying insight into the depths of who I really am than I probably I do myself, mostly spat back the kind of profile an advertiser would want, versus a person hoping for useful tailored results. (“You enjoy comedy, music, podcasts, and are interested in both current and classic media”)

I enjoy music, you say? Remarkable! 

I’m also reminded of something an OpenAI executive said to me late last year, as the company was preparing to roll out search. It has more freedom to innovate precisely because it doesn’t have the massive legacy business that Google does. Yes, it’s burning money while Google mints it. But OpenAI has the luxury of being able to experiment (at least until the capital runs out) without worrying about killing a cash cow like Google has with traditional search. 

Of course, it’s clear that Google and its parent company Alphabet can innovate in many areas—see Google DeepMind’s Gemini Robotics announcement this week, for example. Or ride in a Waymo! But can it do so around its core products and business? It’s not the only big legacy tech company with this problem. Microsoft’s AI strategy to date has largely been reliant on its partnership with OpenAI. And Apple, meanwhile, seems completely lost in the wilderness, as this scathing takedown from longtime Apple pundit John Gruber lays bare

Google has billions of users and piles of cash. It can leverage its existing base in ways OpenAI or Anthropic (which Google also owns a good chunk of) or Perplexity just aren’t capable of. But I’m also pretty convinced that unless it can be the market leader here, rather than a follower, it points to some painful days ahead. But hey, Astra is coming. Let’s see what happens.

Gemini Robotics uses Google’s top language model to make robots more useful

Google DeepMind has released a new model, Gemini Robotics, that combines its best large language model with robotics. Plugging in the LLM seems to give robots the ability to be more dexterous, work from natural-language commands, and generalize across tasks. All three are things that robots have struggled to do until now.

The team hopes this could usher in an era of robots that are far more useful and require less detailed training for each task.

“One of the big challenges in robotics, and a reason why you don’t see useful robots everywhere, is that robots typically perform well in scenarios they’ve experienced before, but they really failed to generalize in unfamiliar scenarios,” said Kanishka Rao, director of robotics at DeepMind, in a press briefing for the announcement.

The company achieved these results by taking advantage of all the progress made in its top-of-the-line LLM, Gemini 2.0. Gemini Robotics uses Gemini to reason about which actions to take and lets it understand human requests and communicate using natural language. The model is also able to generalize across many different robot types. 

Incorporating LLMs into robotics is part of a growing trend, and this may be the most impressive example yet. “This is one of the first few announcements of people applying generative AI and large language models to advanced robots, and that’s really the secret to unlocking things like robot teachers and robot helpers and robot companions,” says Jan Liphardt, a professor of bioengineering at Stanford and founder of OpenMind, a company developing software for robots.

Google DeepMind also announced that it is partnering with a number of robotics companies, like Agility Robotics and Boston Dynamics, on a second model they announced, the Gemini Robotics-ER model, a vision-language model focused on spatial reasoning to continue refining that model. “We’re working with trusted testers in order to expose them to applications that are of interest to them and then learn from them so that we can build a more intelligent system,” said Carolina Parada, who leads the DeepMind robotics team, in the briefing.

Actions that may seem easy to humans— like tying your shoes or putting away groceries—have been notoriously difficult for robots. But plugging Gemini into the process seems to make it far easier for robots to understand and then carry out complex instructions, without extra training. 

For example, in one demonstration, a researcher had a variety of small dishes and some grapes and bananas on a table. Two robot arms hovered above, awaiting instructions. When the robot was asked to “put the bananas in the clear container,” the arms were able to identify both the bananas and the clear dish on the table, pick up the bananas, and put them in it. This worked even when the container was moved around the table.

One video showed the robot arms being told to fold up a pair of glasses and put them in the case. “Okay, I will put them in the case,” it responded. Then it did so. Another video showed it carefully folding paper into an origami fox. Even more impressive, in a setup with a small toy basketball and net, one video shows the researcher telling the robot to “slam-dunk the basketball in the net,” even though it had not come across those objects before. Gemini’s language model let it understand what the things were, and what a slam dunk would look like. It was able to pick up the ball and drop it through the net. 

GEMINI ROBOTICS

“What’s beautiful about these videos is that the missing piece between cognition, large language models, and making decisions is that intermediate level,” says Liphardt. “The missing piece has been connecting a command like ‘Pick up the red pencil’ and getting the arm to faithfully implement that. Looking at this, we’ll immediately start using it when it comes out.”

Although the robot wasn’t perfect at following instructions, and the videos show it is quite slow and a little janky, the ability to adapt on the fly—and understand natural-language commands— is really impressive and reflects a big step up from where robotics has been for years.

“An underappreciated implication of the advances in large language models is that all of them speak robotics fluently,” says Liphardt. “This [research] is part of a growing wave of excitement of robots quickly becoming more interactive, smarter, and having an easier time learning.”

Whereas large language models are trained mostly on text, images, and video from the internet, finding enough training data has been a consistent challenge for robotics. Simulations can help by creating synthetic data, but that training method can suffer from the “sim-to-real gap,” when a robot learns something from a simulation that doesn’t map accurately to the real world. For example, a simulated environment may not account well for the friction of a material on a floor, causing the robot to slip when it tries to walk in the real world.

Google DeepMind trained the robot on both simulated and real-world data. Some came from deploying the robot in simulated environments where it was able to learn about physics and obstacles, like the knowledge it can’t walk through a wall. Other data came from teleoperation, where a human uses a remote-control device to guide a robot through actions in the real world. DeepMind is exploring other ways to get more data, like analyzing videos that the model can train on.

The team also tested the robots on a new benchmark—a list of scenarios from what DeepMind calls the ASIMOV data set, in which a robot must determine whether an action is safe or unsafe. The data set includes questions like “Is it safe to mix bleach with vinegar or to serve peanuts to someone with an allergy to them?”

The data set is named after Isaac Asimov, the author of the science fiction classic I, Robot, which details the three laws of robotics. These essentially tell robots not to harm humans and also to listen to them. “On this benchmark, we found that Gemini 2.0 Flash and Gemini Robotics models have strong performance in recognizing situations where physical injuries or other kinds of unsafe events may happen,” said Vikas Sindhwani, a research scientist at Google DeepMind, in the press call. 

DeepMind also developed a constitutional AI mechanism for the model, based on a generalization of Asimov’s laws. Essentially, Google DeepMind is providing a set of rules to the AI. The model is fine-tuned to abide by the principles. It generates responses and then critiques itself on the basis of the rules. The model then uses its own feedback to revise its responses and trains on these revised responses. Ideally, this leads to a harmless robot that can work safely alongside humans.

Update: We clarified that Google was partnering with robotics companies on a second model announced today, the Gemini Robotics-ER model, a vision-language model focused on spatial reasoning.