I think it is really important to understand exactly what we are banning or not banning.

As far as I'm aware, ChatGPT and all other successful AIs in this space are NOT doing original research to produce an answer. E.g.: they are not running the Linux commands they suggest, or writing a proof of concept.

These models are trying to crack the Turing Test with ever higher success rates[1]. What's really interesting is the extent to which the Chinese Room argument has proven more meaningful than it first appeared: there is a very large gap between convincing humans that an AI understands something and the AI actually understanding it[2].

The information provided by ChatGPT is very intelligently collated from across the internet. But this sets its position in the world similar to that of Wikipedia and Google Search. These are very fine tools, but they should never be considered authoritative sources of information[3].

Unlike Wikipedia, ChatGPT answers are very hard to trace. With copy-paste answers from Wikipedia, we can not only trace the origin of bad answers, but actually go and correct them at the source![4] As far as I know, ChatGPT has no such capability.

The sheer volume of such answers that we have seen makes them a real problem that needs to be dealt with firmly.


Thanks to Kamil Maciorowski for this comment:

If this answer is true then it's very relevant.

That answer nails it. Having discussed it with people I know in the field, I believe that answer is accurate. These AIs are super smart at word play. Really very smart. But they are not conscious. Not yet.

E.g.: the last I heard, "entity linking" across many unconnected sources remains a bit of an unsolved problem. If you see the name "Mickey Mouse" in a document, it's hard to be sure whether the document was discussing the Disney character or using it as a euphemism, as in a "Mickey Mouse operation", to mean silly or poorly run.
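To make that ambiguity concrete, here is a minimal sketch of keyword-overlap disambiguation in Python. The candidate senses and keyword lists are invented for this example; real entity-linking systems are far more sophisticated than this.

    # Toy entity disambiguation: pick the candidate sense whose context
    # keywords best overlap the words surrounding a mention.
    # All keyword lists here are invented for illustration.
    CANDIDATES = {
        "Disney character": {"disney", "cartoon", "film", "walt", "animated"},
        "idiom (shoddy operation)": {"outfit", "shoddy", "amateur", "poorly", "run"},
    }

    def disambiguate(mention: str, context: str) -> str:
        words = set(context.lower().split())
        # Score each candidate by keyword overlap with the context.
        scores = {name: len(keys & words) for name, keys in CANDIDATES.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    print(disambiguate("Mickey Mouse", "Walt Disney created the animated film"))
    # -> Disney character
    print(disambiguate("Mickey Mouse", "a Mickey Mouse outfit, poorly run"))
    # -> idiom (shoddy operation)

Even this toy version shows why it is hard: the same mention resolves differently depending entirely on the surrounding words, and across many unconnected sources there may be no shared context at all.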

Besides that, AI has made some amazing advances in recent years, with specialised "models" for specific tasks: image recognition, image generation, text generation. And logical reasoning has long been relatively trivial in AI. But one thing that remains frustratingly out of reach is a good way to connect these different models into a single system.
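As a small illustration of how trivial the logical-reasoning half is on its own, a complete forward-chaining inference engine fits in a dozen lines (the facts and rules below are invented for the example). The hard, unsolved part is exactly what the language model would need to supply: turning free text into structured facts and rules like these.

    # Minimal forward-chaining inference: derive new facts from
    # if-then rules until nothing new can be concluded.
    facts = {"penguin"}
    rules = [
        ({"penguin"}, "bird"),        # penguin -> bird
        ({"bird"}, "has_feathers"),   # bird -> has feathers
        ({"penguin"}, "cannot_fly"),  # penguin -> cannot fly
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'penguin', 'bird', 'has_feathers', 'cannot_fly'}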

In short, people should not hold their breath waiting for a really great language model to be connected to a really great logical reasoning engine.


To my mind, the idea of allowing AI answers onto SE must wait until an AI can take a question, read some manuals, and then run some tests to prove the solution worked.

I.e.: AI answers must wait until the AI actually understands what it is talking about.
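Purely as a hypothetical sketch of what that "run some tests" loop could look like (nothing like this exists in ChatGPT today, which is exactly the point; every function below is an assumption for illustration):

    # Hypothetical propose-then-verify loop: an answer is only posted
    # once a real test confirms it behaves as claimed. Everything here
    # is an invented stand-in, not a description of any existing AI.
    import subprocess

    def proposed_command() -> str:
        # Stand-in for "the AI suggests a shell command".
        return "echo hello"

    def verify(command: str) -> bool:
        # Run the suggested command in a throwaway environment and
        # check that it actually produces the claimed result.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=10
        )
        return result.returncode == 0 and "hello" in result.stdout

    answer = proposed_command()
    if verify(answer):
        print("post answer:", answer)
    else:
        print("discard unverified answer")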


  • [1] The next frontier is fooling people with subject matter expertise.
  • [2] My own experience interviewing tech candidates for a role is that even some humans can pass the Turing Test but ultimately show zero understanding of the real subject matter when presented with our trivial tech tests.
  • [3] Wikipedia even has a ban on original research.
  • [4] My only ever Wikipedia edit came from just such a case.
