Worth Reading – Trust Networks as Antidote to AI Slop
We’ve seen a lot of discussion about AI projects failing to show significant ROI, and this may be because the results are not trustworthy — which raises the question of whether the data being modeled is reliable. If AI is surfacing incorrect information because the model was built on outdated or inaccurate data, the problem lies with the source of the data.
I don’t particularly care what side of the political aisle you’re on. If we can’t all agree that using fake images and videos in situations like this is beyond the pale, we are sunk. If law enforcement can create evidence and get away with it, no one is free.
We are all one image away from prison. Is that the world AI is creating?
For more like this, subscribe to the newsletter and get these links and more in your email.
This story seems inevitable to me. As the tech world increasingly pushes us to ask an AI chatbot for information instead of looking it up ourselves, what happens when the model holds incorrect information — where do you get it corrected? When OpenAI was asked to correct or remove this misinformation, they said it was ‘technically impossible’…