Yes, there have been warnings that ChatGPT sometimes fabricates data (I've seen reporters post about nonexistent book references), but this time it was nonexistent case law. As more people experiment with ChatGPT, I expect to see more examples of this. One so far was the PR firm that used ChatGPT to pitch reporters, cited a nonexistent book, and had to come clean that the pitches were AI generated. That's not as bad as the lawyer example in the article, but automation can breed laziness, and mistakes like these are only going to become more common. https://buff.ly/3oD2rOg

via Buffer