OpenAI takes a new approach to combat AI “hallucinations”

OpenAI announced Wednesday that it is taking on hallucinations in artificial intelligence, using a new method for training AI models.

The research comes at a time when misinformation emanating from AI systems is more hotly debated than ever, amid the generative AI boom and the run-up to the 2024 US presidential election. OpenAI accelerated the generative AI boom last year when it released ChatGPT, its chatbot powered by GPT-3 and GPT-4, which surpassed 100 million monthly users within two months and reportedly set a record for the fastest-growing application. To date, Microsoft has invested more than $13 billion in OpenAI, and the startup has been valued at about $29 billion.

AI hallucinations occur when models such as ChatGPT or Google’s Bard fabricate information outright while presenting it as plain fact. One example: in Google’s February promotional video for Bard, the chatbot makes an untrue claim about the James Webb Space Telescope. More recently, ChatGPT cited bogus cases in a New York federal court filing, and the New York attorneys involved could face sanctions.

“Even modern models tend to produce falsehoods; they show a tendency to invent facts in moments of uncertainty,” the OpenAI researchers wrote in the report. “These hallucinations are particularly problematic in areas that require multi-step reasoning, because a single logical error is enough to derail a much larger solution.”

OpenAI’s potential new strategy for fighting these fabrications: train AI models to reward themselves for each correct individual step of reasoning as they work toward an answer, rather than simply rewarding a correct final result. This approach is called “process supervision,” as opposed to “outcome supervision,” and could lead to more interpretable AI, according to the researchers, since the strategy encourages models to follow a chain of human-like “thought.”
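The sketch below illustrates that distinction in Python. It is a minimal illustration under stated assumptions: the function names, the per-step labels, and the scoring rule are hypothetical choices made for clarity, not OpenAI’s actual reward-model implementation.

```python
# Illustrative sketch only: contrasts outcome supervision with process
# supervision for scoring a model's multi-step solution. Function names,
# label format, and scores are assumptions for illustration.

from typing import List


def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: a single reward based only on the end result."""
    return 1.0 if final_answer.strip() == correct_answer.strip() else 0.0


def process_reward(step_labels: List[int]) -> float:
    """Process supervision: reward each individually correct reasoning step.

    `step_labels` holds one judgment per step (1 = correct, 0 = incorrect).
    Here the score is simply the fraction of steps judged correct, so a
    flawed step is penalized even if the final answer happens to be right.
    """
    if not step_labels:
        return 0.0
    return sum(step_labels) / len(step_labels)


if __name__ == "__main__":
    # A solution whose final answer is right despite a faulty middle step.
    steps = [1, 1, 0, 1]                 # third reasoning step judged incorrect
    print(outcome_reward("42", "42"))    # 1.0 -- outcome supervision sees no problem
    print(process_reward(steps))         # 0.75 -- process supervision flags the bad step
```

The point of the example is that outcome supervision cannot distinguish a sound chain of reasoning from a lucky guess, while step-level scoring makes the location of a logical error visible.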


“Detecting and mitigating a model’s logical mistakes, or hallucinations, is a critical step towards building aligned AGI [artificial general intelligence],” Karl Cobbe, a researcher at OpenAI, told CNBC, noting that although OpenAI did not invent the process supervision approach, the company is helping push it forward. “The motivation behind this research is to address hallucinations in order to make models more capable of solving challenging reasoning problems.”

Cobbe said that OpenAI has released an accompanying dataset of 800,000 human labels that it used to train the model mentioned in the paper.
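As a rough illustration of what step-level human labels can look like, here is a hypothetical record sketched in Python. The field names and the 1/0 label convention are assumptions made for illustration and are not claimed to match the schema of the dataset OpenAI released.

```python
# Hypothetical example of a single record in a step-labeled feedback dataset.
# The schema below is illustrative only.

example_record = {
    "problem": "If 3x + 5 = 20, what is x?",
    "steps": [
        {"text": "Subtract 5 from both sides: 3x = 15.", "label": 1},  # 1 = judged correct
        {"text": "Divide both sides by 3: x = 5.", "label": 1},
    ],
    "final_answer": "x = 5",
}

# Each reasoning step carries its own human judgment, which is what allows a
# reward model to be trained to score solutions step by step rather than
# only grading the final answer.
for step in example_record["steps"]:
    print(step["label"], step["text"])
```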

Ben Winters, senior counsel at the Electronic Privacy Information Center and leader of its AI and human rights project, expressed skepticism, telling CNBC he would be interested to see the full dataset and accompanying examples.

“I don’t think this alone significantly mitigates concerns about misinformation and incorrect results... when it’s actually used in the wild,” Winters said. He added: “It certainly matters whether they plan to implement whatever they find through their research here [into their products], and if they are not, that raises some fairly serious questions about what they are willing to go public with.”

Because it is not clear whether the OpenAI paper has been peer-reviewed or reviewed in another format, Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University, told CNBC that he views the research more as a preliminary observation than anything else.

“This is going to need to shake out in the research community before we can say anything definite about it,” Venkatasubramanian said. “In this world, a lot of results come out very regularly, and because of the overall instability in how large language models work, what might work in one setting, model, and context may not work in another setting, model, and context.”


Venkatasubramanian added: “Some of the hallucination issues that people have been worried about involve [models] making up citations and references. There is no evidence in this paper that this would work for that... I’m not saying it won’t work; I am saying that this paper does not provide that evidence.”

OpenAI did not respond to a request for comment asking if the research was externally reviewed in any capacity, or when, if at all, the company plans to implement the new strategy in ChatGPT and its other products.

“It’s certainly welcome to see companies trying to tinker with the development of their systems to try to minimize these kinds of errors; I think the key is to interpret this as corporate research, in light of the many barriers that exist to deeper forms of accountability,” Sarah Myers West, managing director of the AI Now Institute, told CNBC.

West added: “[OpenAI is] releasing a small dataset of human-level feedback with this paper, but it hasn’t provided basic details about the data used to train and test GPT-4. So there remains a tremendous amount of opacity that challenges any meaningful AI accountability efforts, even as these systems are already directly affecting people.”
