Has 'Chat GPT' proven helpful?
I was a tad surprised the other day when I asked ChatGPT to tell me which questions on the AQ test were about numbers. It refused to list them, on the grounds that they were "copyright material." Even if that's technically true, the AQ test is bandied about so freely in the public domain, with no attempt by its author to arrest the "pirates," that it beggars belief that anybody or anything could have the faintest reservation about passing a few of the test's questions on to another person.
The thing about English (or any natural human language) is that context and related concepts matter. One NL (natural language) researcher had his grad students read magazine and newspaper articles and then type in the facts a reader would already have to know in order to understand them. With the modern internet there is so much text already available that this kind of exercise may not be needed: multiple dictionaries, thesauruses, and encyclopedias could be fed into a set of training documents, along with examples of written articles and internet chat or forum posts.
ChatGPT will discuss itself with you if you ask the right questions. It will mention that it uses an ANN (artificial neural network) and relies on the architecture introduced in the paper “Attention Is All You Need.”
Me: What is “Attention Is All You Need”?
G:
“Attention Is All You Need” is a groundbreaking 2017 research paper by Google Brain researchers, including Ashish Vaswani and others, that introduced the Transformer architecture, which has since become foundational for modern natural language processing (NLP) models.
The key ideas introduced in the paper are:
1. Self-Attention Mechanism: Unlike previous models that relied on recurrent neural networks (RNNs) or convolutional neural networks (CNNs) to capture sequential relationships in data, the Transformer model uses self-attention. Self-attention allows each word in a sentence to “attend” to all other words, regardless of their position in the sequence, capturing context more effectively.
2. Parallelization: Transformers are designed to process entire sentences or sequences in parallel, making them much faster than RNN-based models, which process data sequentially.
3. Encoder-Decoder Structure: The original Transformer has an encoder that reads the input data (like a sentence) and a decoder that generates the output (like a translated sentence). Both the encoder and decoder layers heavily use the self-attention mechanism to process and generate contextual embeddings of words.
4. Positional Encoding: Since the Transformer processes all tokens in a sequence simultaneously, it doesn’t inherently understand the order of words. To address this, it adds positional encodings to represent the order of words in the sequence.
This model was revolutionary because it significantly improved performance on tasks like machine translation, summarization, and question answering, leading to the development of models like BERT, GPT, and T5, which have transformed NLP and AI as a whole. The phrase “Attention Is All You Need” reflects the core concept that this self-attention mechanism could capture relationships and dependencies in data without the need for traditional RNN or CNN architectures.
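For anyone curious what points 1 and 4 in that answer actually look like in practice, here is a rough, toy sketch in Python using only NumPy. It is not ChatGPT's or the paper's actual code, just an illustration under simplified assumptions: sinusoidal positional encodings are added to the token embeddings, then a single head of scaled dot-product self-attention lets every token mix in information from every other token. All names, shapes, and the random demo data are my own for illustration.

```python
# Toy sketch of two Transformer ideas: sinusoidal positional encoding
# (point 4) and single-head scaled dot-product self-attention (point 1).
# Names and shapes are illustrative, not taken from any real library.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: sines on even dims, cosines on odd dims."""
    positions = np.arange(seq_len)[:, None]             # (seq_len, 1)
    dims = np.arange(d_model)[None, :]                   # (1, d_model)
    angle_rates = 1.0 / np.power(10000, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                     # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])                # even dimensions -> sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])                # odd dimensions  -> cosine
    return pe

def self_attention(x, W_q, W_k, W_v):
    """One head of scaled dot-product self-attention over a sequence x."""
    Q, K, V = x @ W_q, x @ W_k, x @ W_v                  # project to queries/keys/values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the sequence
    return weights @ V                                   # weighted mix of value vectors

# Tiny demo: 5 "tokens" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, W_q, W_k, W_v)
print(out.shape)  # (5, 8): one context-aware vector per token
```

A real Transformer stacks many such layers, uses multiple attention heads, and learns the projection matrices from data, but the basic shape of the computation is the same.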
Today for the first time ever, ChatGPT asked me what I thought about a subject I'd been asking it about. "What are your thoughts on it?" it said.
I was rather surprised, because it had never done anything but answer my questions before. I've never asked it to ask me anything; all I've ever done is ask it questions.
^
Can you remember what you asked it? One notable time for me was when I asked it to identify the B side of a certain pop single:
ME: What was the single by Spike Milligan that had a B side on which there was just silence with a comment at the end saying something like "there's nothing here you know" ?
CHATGPT: The single you're referring to is "Labio-Dental Fricative" by Spike Milligan. The B-side of this record featured a track titled "Nothing at All," where there was indeed silence with Milligan's voice at the end saying, "There's nothing here, you know!" It was a humorous and unconventional release by the legendary comedian and writer.
It got the name of the B-side right, but the A-side was "Purple Aeroplane" - that single was released in 1966, while "Labio-Dental Fricative" was by The Sean Head Showband and wasn't released until 1970. Nor is "Nothing at All" a silent record with Milligan's voice only at the end: it's full of talking and sound effects, and Milligan's voice is audible right from the start. I'd wrongly described it that way when I asked the question, because I'd never heard it at the time; I was just going on what I remembered having been told about it back in the 1960s.
I suspect, then, that if I feed ChatGPT false information in a question, it can somehow get tricked into believing it. It also doesn't seem able to admit when it doesn't know the answer to a question.