Multiple Studies Now Suggest That AI Will Make Us Morons

The EU just issued guidelines for AI safety

The AI Act applies to AI models that the commission deems to carry systemic risks that could significantly affect "public health, safety, fundamental rights, or society."


How it was done:

And the Code of Practice is here:
 

92 Percent of People Don’t Check Their AI Answers, a New Report Warns​

It’s time to think about best practices for using artificial intelligence.

You’d think by now people would know that they really can’t trust large language model (LLM) artificial intelligence apps like ChatGPT to provide accurate information and would carefully check any answers they got from AI tools. After all, there are already plenty of stories of chatbots hallucinating and outright lying to users, sometimes with disastrous consequences.

But a recent report by Exploding Topics, a trendspotting company, suggests that despite people’s knowledge of AI hallucination problems and their own skepticism regarding these tools, only a measly 8 percent actually check the answers they get from AI themselves.
 

Including here: https://www.hifivision.com/threads/chatgpt-vs-audio-forums.99107/
 
@essrand is a very experienced and passionate audiophile. I read and enjoy his well-written accounts, which detail his process of choosing components and his personal observations in a frank and lucid manner. I hope he will write about how and what he finally chose to replace his Nagra and other components, and whether and how the ChatGPT recommendations he sought and received influenced those choices.
 
In biomedical and public health research, policy recommendations follow a graded system used by the WHO and others. Based on defined criteria and the strength of the evidence, such recommendations are put forth as
“Strongly recommended,” “Recommended with conditions,” and so on.
Such papers are accessible to everyone, and the data sources are open to anyone who wants to check their validity. The process used is documented, as are the affiliations of each author.

Maybe the “masters of AI” need to be educated about, and compelled to follow, such ethical practices, moving the AI movement towards transparency about how AI recommendations are made in every case.
But that would not lead to profits and wealth for the masters of AI, would it?
 