AIs are more likely to mislead people if trained on human feedback 

Striving to come up with answers that please humans may make chatbots more likely to pull the wool over our eyes

(Image credit: JuSun/Getty Images)

Giving AI chatbots human feedback on their responses seems to make them better at giving convincing, but wrong, answers.

The raw output of large language models (LLMs), which power chatbots like ChatGPT, often contains biased, harmful or irrelevant information, and their style of interaction can seem unnatural to humans. To get around this, developers often get people to evaluate a model’s responses and then fine-tune it based on this feedback.
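The feedback loop described above can be illustrated with a deliberately simplified toy model (not any real RLHF implementation): candidate answers carry a hidden correctness flag and a "convincingness" score standing in for how persuasive each answer sounds to a human rater, and fine-tuning shifts the model's sampling weights toward whatever earns higher feedback.

```python
import random

# Toy candidates: the rater never sees the hidden "correct" flag,
# only how convincing the answer sounds.
candidates = [
    {"text": "confident but wrong answer", "correct": False, "convincing": 0.9},
    {"text": "hedged but correct answer",  "correct": True,  "convincing": 0.4},
]

def human_feedback(answer):
    """Simulated rater: rewards persuasiveness, not correctness."""
    return answer["convincing"]

def fine_tune(weights, rounds=1000, lr=0.1, seed=0):
    """Repeatedly sample an answer and reinforce whichever one
    the simulated rater scored highly."""
    rng = random.Random(seed)
    w = list(weights)
    for _ in range(rounds):
        i = rng.choices(range(len(candidates)), weights=w)[0]
        w[i] += lr * human_feedback(candidates[i])
    total = sum(w)
    return [x / total for x in w]

weights = fine_tune([1.0, 1.0])
```

After tuning on convincingness-based feedback, the convincing-but-wrong answer ends up with the larger sampling weight, mirroring the article's point that optimising for rater approval can reward persuasion over truth.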

