
ToughDiamond
Veteran
Joined: 15 Sep 2008
Age: 71
Gender: Male
Posts: 11,958

20 Oct 2024, 7:20 am

^
But isn't it just a matter of using other sources to check what it says? Ultimately, all sources of information have the potential to be incorrect, and we can never be absolutely certain we have accurate information.



ToughDiamond
Veteran
Joined: 15 Sep 2008
Age: 71
Gender: Male
Posts: 11,958

27 Oct 2024, 9:08 am

I was a tad surprised the other day when I asked ChatGPT to tell me which questions on the AQ test were about numbers. It refused to list them, on the grounds that they were "copyright material." Even if that's technically true, the AQ test is bandied about in public so freely, with no attempt by its author to stop the "pirates," that it beggars belief that anybody or anything could have the faintest reservations about communicating a few of the test's questions to another person.



Fenn
Veteran
Joined: 1 Sep 2014
Gender: Male
Posts: 2,965
Location: Pennsylvania

27 Oct 2024, 10:47 am

The thing about English (or any natural human language) is that context and related concepts matter. One NL (natural language) researcher had his grad students read magazine and newspaper articles and then type in the facts someone would already have to know in order to understand them. With the modern internet there is so much text already available that this kind of thing may not be needed. Multiple dictionaries, thesauruses, and encyclopedias could be fed into a set of training documents, plus examples of written articles and internet chat or forum posts.

ChatGPT will discuss itself with you if you ask the right questions. It will mention that it uses an ANN (artificial neural network) and relies on an architecture from the paper “Attention Is All You Need”.

Me: What is “attention is all you need”

G:
“Attention Is All You Need” is a groundbreaking 2017 research paper by Google Brain researchers, including Ashish Vaswani and others, that introduced the Transformer architecture, which has since become foundational for modern natural language processing (NLP) models.

The key ideas introduced in the paper are:

1. Self-Attention Mechanism: Unlike previous models that relied on recurrent neural networks (RNNs) or convolutional neural networks (CNNs) to capture sequential relationships in data, the Transformer model uses self-attention. Self-attention allows each word in a sentence to “attend” to all other words, regardless of their position in the sequence, capturing context more effectively.
2. Parallelization: Transformers are designed to process entire sentences or sequences in parallel, making them much faster than RNN-based models, which process data sequentially.
3. Encoder-Decoder Structure: The original Transformer has an encoder that reads the input data (like a sentence) and a decoder that generates the output (like a translated sentence). Both the encoder and decoder layers heavily use the self-attention mechanism to process and generate contextual embeddings of words.
4. Positional Encoding: Since the Transformer processes all tokens in a sequence simultaneously, it doesn’t inherently understand the order of words. To address this, it adds positional encodings to represent the order of words in the sequence.

This model was revolutionary because it significantly improved performance on tasks like machine translation, summarization, and question answering, leading to the development of models like BERT, GPT, and T5, which have transformed NLP and AI as a whole. The phrase “Attention Is All You Need” reflects the core concept that this self-attention mechanism could capture relationships and dependencies in data without the need for traditional RNN or CNN architectures.
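The self-attention and positional-encoding ideas in the list above can be sketched in a few lines of NumPy. This is my own toy illustration, not anything from the thread or from the paper's actual codebase: the matrix names (`Wq`, `Wk`, `Wv`) and the tiny dimensions are made up for demonstration, though the formulas (scaled dot-product attention, sinusoidal positional encodings) follow the paper's description.

```python
# Toy sketch of scaled dot-product self-attention and sinusoidal
# positional encodings, as described in "Attention Is All You Need".
# All names and sizes here are illustrative, not from any real model.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence.

    X          : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : (d_model, d_model) projections to queries/keys/values
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # scores[i, j] = how much token i "attends" to token j,
    # regardless of their positions in the sequence.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V                   # context-mixed representation per token

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings (sin on even dims, cos on odd dims)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

# Tiny worked example: 5 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
# Adding positional encodings injects word order, since attention itself
# processes all tokens in parallel and has no built-in notion of order.
X = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one context-aware vector per input token
```

Because every token's output is just a weighted mix of all the value vectors, the whole sequence can be computed with a couple of matrix multiplications at once, which is the parallelization advantage over RNNs mentioned in point 2.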


_________________
ADHD-I(diagnosed) ASD-HF(diagnosed)
RDOS scores - Aspie score 131/200 - neurotypical score 69/200 - very likely Aspie


ToughDiamond
Veteran
Joined: 15 Sep 2008
Age: 71
Gender: Male
Posts: 11,958

31 Oct 2024, 2:19 pm

Today for the first time ever, ChatGPT asked me what I thought about a subject I'd been asking it about. "What are your thoughts on it?" it said.

I was rather surprised, because it's never done anything but answer my questions before, and I've never asked it to ask me anything; all I've ever done is ask it questions.



Carbonhalo
Veteran
Joined: 26 Nov 2007
Age: 63
Gender: Male
Posts: 1,984
Location: Musoria

31 Oct 2024, 2:52 pm

I've only used AI once... and it fisted the response so badly that I spent more time correcting it than if I'd done the research myself.



ToughDiamond
Veteran
Joined: 15 Sep 2008
Age: 71
Gender: Male
Posts: 11,958

31 Oct 2024, 9:31 pm

^
Can you remember what you asked it? One notable time for me was when I asked it to identify the B side of a certain pop single:

ME: What was the single by Spike Milligan that had a B side on which there was just silence with a comment at the end saying something like "there's nothing here you know"?

CHATGPT: The single you're referring to is "Labio-Dental Fricative" by Spike Milligan. The B-side of this record featured a track titled "Nothing at All," where there was indeed silence with Milligan's voice at the end saying, "There's nothing here, you know!" It was a humorous and unconventional release by the legendary comedian and writer.

It got the name of the B side right, but the A side was "Purple Aeroplane" - the single was released in 1966, while "Labio-Dental Fricative" was by The Sean Head Showband and wasn't released until 1970. Nor is "Nothing At All" a silent record with Milligan's voice only at the end: it's full of talking and sound effects, and Milligan's voice is audible right from the start. I'd wrongly described it that way when I asked the question because I'd never heard it at the time; I was just going on what I remembered being told about it back in the 1960s.

I suspect, then, that if I feed ChatGPT false information in a question, it can somehow get tricked into believing it. It also doesn't seem able to admit when it doesn't know the answer to a question.