Googlers say Bard AI is ‘worse than useless’, ethical concerns ignored

From the outside, Google Bard looks like a rushed product meant to compete with ChatGPT, and some Googlers share that sentiment. A new report from Bloomberg draws on interviews with 18 current and former workers and surfaces a pile of comments and concerns from AI ethics staff, who say the team was left “disempowered and demoralized” so Google could get Bard out the door.

According to the report, Google employees who tested Bard before release were asked for their feedback, which was mostly ignored so Bard could launch sooner. Internal discussions seen by Bloomberg called Bard “cringe-worthy” and “a pathological liar.” When asked how to land a plane, it gave incorrect instructions that would have led to a crash. One employee asked for scuba diving instructions and got an answer they said was “likely to result in serious injury or death.” Another summed up Bard’s problems in a February post titled “Bard is worse than useless: please do not launch.” Bard launched in March.

You could probably say many of the same things about the AI competitor Google is chasing, OpenAI’s ChatGPT. Both can give biased or false information and hallucinate incorrect answers. Google is playing catch-up with ChatGPT, and the company is reportedly alarmed by ChatGPT’s ability to answer questions people might otherwise type into a Google search. OpenAI, the creator of ChatGPT, has been criticized for taking a lax approach to AI safety and ethics, and now Google finds itself in a difficult position: if the company’s chief concern is calming the stock market and catching up with ChatGPT, it likely won’t be able to do that while also slowing down to weigh ethical issues.


Meredith Whittaker, a former Google manager and president of the Signal Foundation, tells Bloomberg that “AI ethics has taken a back seat” at Google and says that “if ethics aren’t in a position to take priority over profit and growth, they ultimately won’t work.” Several Google AI ethics leaders have been fired or have left the company in recent years, and Bloomberg says AI ethics reviews today are “almost entirely voluntary” at Google.

While you could try to slow down releases at Google over ethical issues, it probably won’t do your career any good. From the report: “One former employee said they asked to work on fairness in machine learning and were routinely discouraged, to the point that it affected their performance review. Managers protested that it was getting in the way of their ‘real work.’”
