25 events
Jan 23, 2023 at 22:11 comment added mirabilos @user31389 no, it’s transformed. “Transform” does not mean it stores the training dataset literally; it’s sufficient that it stores the training dataset in a transformed form (which is executed by software running on a deterministic computer). People have been able to extract sufficiently detailed training data from these systems, which proves that this is enough.
Jan 21, 2023 at 1:00 history edited CommunityBot (Staff)
Jan 20, 2023 at 21:45 comment added Braiam @user31389 I expect examples of Stack Exchange questions and answers, or at least questions and answers. Those are just conversations, not in line with the format of the sites.
Jan 20, 2023 at 1:37 comment added user31389 @Braiam See these links for examples of mistakes: en.wikipedia.org/wiki/ChatGPT#Negative_reactions reddit.com/r/ChatGPT/comments/zd7l8t/nice reddit.com/r/ChatGPT/comments/zpabrh/… reddit.com/r/ChatGPT/comments/101e454/… reddit.com/r/ChatGPT/comments/10g6k7u/… aiweirdness.com/botsplaining aiweirdness.com/baltimore-orioles-effect
Jan 20, 2023 at 1:11 comment added Braiam @user31389 do you have examples of generating wrong answers?
Jan 19, 2023 at 21:57 comment added user31389 @mirabilos It is generated. "Transform" would mean that ChatGPT stores the training dataset and later draws from it when answering questions. But it doesn't do that. It learns and then later generates answers from what it has learned. If this is transformation then human answers are also transformations. But I agree that it is just a text prediction system and it can easily generate wrong answers while sounding very confident.
Jan 15, 2023 at 15:20 vote accept Jeff Schaller (Mod)
Jan 13, 2023 at 20:32 comment added mirabilos ChatGPT is basically just predictive text, and easily wrong in detail. But its output - NOT “generated” but transformed content - is a derivative of all of its inputs, and therefore usually illegal. I fully support the blanket ban on ML (so-called “AI”) content.
Jan 3, 2023 at 10:23 answer added iBug timeline score: 7
Jan 1, 2023 at 15:32 comment added Sam Watkins OpenAI is supposedly working on a statistical / cryptographic "watermark" for ChatGPT, so it would be possible to spot AI answers by checking for that watermark, if they give us a means to do so. Of course, it would also be possible to remove the watermark by running the output through a program to adjust it.
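The watermark idea in the comment above can be illustrated with a toy sketch. This is an assumption-laden illustration, not OpenAI's actual scheme: a seeded hash of the previous word assigns each word to a "green list" covering roughly half the vocabulary; a watermarking generator would favor green words, so watermarked text shows a statistically improbable excess of them, which a detector can flag.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign `word` to the green list, seeded by the
    preceding word. (Toy scheme for illustration only.)"""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # ~50% of words are green for any context

def green_fraction(text: str) -> float:
    """Fraction of words that land on the green list given their predecessor.
    Unwatermarked text should hover near 0.5; watermarked text scores higher."""
    words = text.split()
    if len(words) < 2:
        return 0.0  # not enough context to score
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

A detector would run a z-test on the green count against the 0.5 baseline; as the comment notes, paraphrasing the output would scramble the word sequence and erase the signal.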
Dec 29, 2022 at 5:30 answer added stutopp timeline score: -4
Dec 27, 2022 at 20:43 comment added schrodingerscatcuriosity Now all we need is an AI that can spot AI answers. Problem solved ^^.
Dec 27, 2022 at 17:23 answer added Braiam timeline score: -6
Dec 18, 2022 at 19:06 history edited Jeff Schaller (Mod): edited tags
Dec 14, 2022 at 13:07 comment added Philip Couling @schrodingerscatcuriosity Thanks, that's a good read. What made me think of it was my colleague's experience with copilot suddenly suggesting significant blocks of code (10+ lines) which even included a comment making it clear which project it had been scraped from. The really scary thing is what happens when you can't trace it.
Dec 14, 2022 at 12:52 comment added schrodingerscatcuriosity @PhilipCouling Something related is going on with AI art.
Dec 14, 2022 at 2:55 comment added Philip Couling On this I have a point of curiosity over licensing and copyright. I wonder about the risk of AI-generated answers reproducing actual content scraped from uncited sources. This would raise a very significant licensing concern if it's then published under CC BY-SA 4.0. At least with copy-paste Wikipedia answers this issue is fairly clear cut.
Dec 14, 2022 at 2:44 answer added Philip Couling timeline score: 14
Dec 13, 2022 at 8:30 answer added dr_ timeline score: 10
Dec 9, 2022 at 19:45 answer added Wildcard timeline score: 46
Dec 7, 2022 at 14:36 history became hot meta post
Dec 7, 2022 at 9:57 answer added Kusalananda (Mod) timeline score: 0
Dec 7, 2022 at 8:28 answer added MC68020 timeline score: 7
Dec 6, 2022 at 19:57 comment added Sotto Voce My input is: prohibit. Expert systems are fine when a question is well formed and includes complete, accurate details. My long experience in phone tech support for computer systems, and my experience here since June, show that U&L questions often lack sufficient or accurate info, and that what is being asked is unclear. Extracting the needed details and clarity requires a question-and-answer exchange with the OP. The percentage of questions needing this conversation is too high for expert-system answers to be beneficial. IMO
Dec 6, 2022 at 17:43 history asked Jeff Schaller (Mod) CC BY-SA 4.0