Interview with ChatGPT

I sat down with ChatGPT to caw the crack about its revolutionary functionality and AI in general. Will it put us out of work? Could it ever become sentient? Find out in this exclusive interview.

Update (18 Feb): I’ve added a note about how ChatGPT works, which may explain some of the nonsense it can produce.

It’s not hyperbole to say that ChatGPT is revolutionary. It’s a significant leap over the often frustrating experience of using Siri or Alexa.

Like Siri and Alexa, ChatGPT doesn’t always get everything right. It can state nonsense confidently as if it were fact. However, whether I’m searching the web, reading a newspaper article, or even an academic text, I’m quite used to keeping my bullshit detector switched on and comparing several sources to verify things. In that respect, ChatGPT seems as good as any other information source. (It got almost everything about my areas of expertise — the Scots language, software development and music — correct.)

Since its launch late last year I’ve been thinking about whether ChatGPT might free white-collar workers (or middle-class professionals, as you might call them here in the UK) from everyday office drudgery. We’ve been getting there over time: my first office job 20 years ago, where I transferred information from scanned paper copies of life insurance applications into text fields in a desktop application, typing one letter at a time, surely no longer exists.

Could ChatGPT be a revolutionary, as opposed to evolutionary, step in freeing us from monotonous work (not before UBI though, please; I have a mortgage to pay)? I have questions about what AI tools might mean for content producers, office workers and even the future of society. I decided to put some of them to ChatGPT directly to see what it had to say. It was an informative chat. A transcript is below. The original text can be found on the OpenAI website.


Will ChatGPT ever list its sources so users can verify the information and do further reading?

ChatGPT is trained on a diverse range of internet text, which means it can generate responses based on the information it has seen during training. However, it does not store or have a way to retrieve the specific source of the information it provides. While it is not possible for ChatGPT to list its sources for users to verify information or do further reading, it is possible to develop a system that would keep track of the sources used to train a certain model version, so that users could identify potential sources of the information generated by the model. This can be done by keeping a record of the data used to train the model, but this would require significant computational resources and would make the model less efficient.

It’s important to note that GPT-3 and other models like it should be considered as an aid and not a replacement for human knowledge and critical thinking. They should not be used as a sole source of information and their outputs should always be critically evaluated.

Will providers of the information used by ChatGPT be compensated in some way? For example, will a[n] academic receive any royalties if several of their papers contributed to the model?

It is unlikely that individual authors or sources of the text used to train ChatGPT or other language models will be compensated directly. The vast majority of the text used to train these models is taken from publicly available sources on the internet, such as websites and social media, which are generally not covered by copyright. Additionally, it would be extremely difficult to track down and compensate all of the individual authors and sources of the text used to train a model, especially given the sheer volume of text that is used.

In the case of academic papers, it’s possible that the authors of these papers would not receive any royalties or compensation from the use of their papers in the training data set. Many academic papers are made available online for free and the authors have likely signed over the copyright to the publisher.

It’s worth noting that OpenAI, the company behind ChatGPT, has a policy of only using text that is publicly available and not covered by copyright. Additionally, OpenAI has also committed to using a portion of its revenue to fund research and development in AI safety and open AI.

It’s important to note that the use of text from any sources, whether they are publicly available or not, should always be done in compliance with the terms of use and copyright laws.

Could ChatGPT ingest highly technical sources that cover a particular domain, for example aerospace engineering, to create a custom model for use within a particular industry?

Yes, it is possible to fine-tune ChatGPT or other language models on a specific domain or industry, such as aerospace engineering. This process is called “domain adaptation” or “domain transfer.” By fine-tuning the model on a specific dataset of domain-specific text, the model can learn the specific terminology, concepts and writing style of that domain, which allows it to generate more accurate and relevant responses when working within that domain.

Fine-tuning can be done by using a dataset of technical documents, articles or papers that cover the specific domain such as aerospace engineering. The process involves training the model on this specific dataset, which allows it to adapt to the specific characteristics of the domain.

However, it’s worth noting that fine-tuning a model on a specific domain requires a large amount of high-quality, domain-specific data and computational resources. Additionally, the fine-tuned model will perform best on the specific task and domain it was trained on and may not generalize well to other tasks or domains.

It’s also important to keep in mind that a fine-tuned model may not perform as well as an expert in the field, and the outputs should always be evaluated critically.
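For the curious, here’s a minimal sketch of what such fine-tuning looked like at the time of writing, using the fine-tuning endpoints in OpenAI’s (pre-1.0) Python library against a base GPT-3 model; ChatGPT itself couldn’t be fine-tuned. The corpus file name and model choice are illustrative assumptions on my part.

```python
# A sketch of domain fine-tuning with the (pre-1.0) openai library.
# The corpus file and base model below are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # your API key

# Upload a JSONL file of {"prompt": ..., "completion": ...} pairs
# built from domain texts (e.g. aerospace engineering papers).
upload = openai.File.create(
    file=open("aerospace_corpus.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against a base GPT-3 model.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",
)
print(job["id"])  # poll this ID until the job completes
```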

Can you give a ballpark figure as to how much data would be required? For example, would 250,000 words be enough?

The amount of data required to fine-tune a language model on a specific domain depends on various factors such as the complexity of the domain, the quality of the data, and the size of the pre-trained model.

In general, the more data and the more diverse the data, the better the fine-tuned model will perform. However, as a rough estimate, around 250,000 words may be enough to fine-tune a model on a specific domain, but it’s also possible that more or less data may be needed depending on the specific situation.

For example, for a specific task or domain like aerospace engineering, 250,000 words of high-quality, domain-specific text such as academic papers, technical reports and journal articles would likely be sufficient to fine-tune a large pre-trained language model like GPT-3. However, for a more complex and broader domain like medicine, it may require more data to achieve similar results.

It’s worth noting that in practice, obtaining a large amount of high-quality, domain-specific data can be a challenging task and requires considerable effort. Additionally, it’s also important to keep in mind that fine-tuning a model on a specific domain requires computational resources and time, and the performance may vary depending on the specific task and data.
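To put that figure in perspective: models count tokens rather than words, and OpenAI’s usual rule of thumb is that a token is roughly 0.75 English words. A quick back-of-the-envelope conversion:

```python
# Rough conversion of the 250,000-word figure into tokens, using
# OpenAI's rule of thumb of ~0.75 words per token.
words = 250_000
tokens = words / 0.75
print(f"{words:,} words is roughly {tokens:,.0f} tokens")
# -> 250,000 words is roughly 333,333 tokens
```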

Can ChatGPT understand diagrams as well as standard text/prose?

ChatGPT is a language model, which means it is trained to understand and generate text. It is not specifically designed to understand diagrams or images; however, it can be fine-tuned on a dataset that includes images and captions, allowing the model to learn some level of understanding of image-text associations.

There are other models like image captioning models, which can be used in conjunction with language models like ChatGPT to understand and generate captions for images. These models are trained to understand the content of images and generate text descriptions of them.

It’s worth noting that current state-of-the-art models for image captioning still have limited understanding of complex images and there is still a lot of research being done in this field to improve their ability to understand and generate captions for images.

Moreover, it’s also important to keep in mind that a model’s understanding of images is limited to the data it has seen during training, and it may not generalize well to new images or situations.

If ChatGPT doesn’t know the answer to a user query, will it just say so? Would it ever give out a false answer if it didn’t know the correct answer?

ChatGPT is a machine learning model, which means that it is able to generate responses based on the patterns it has learned during training. However, it is not infallible: it may not know the answer to every question, or it may provide a false answer.

If ChatGPT doesn’t know the answer to a user’s query, it will likely generate a response that indicates that it doesn’t have enough information to provide an answer. For example, it may say something like “I’m sorry, I don’t know the answer to that question” or “I don’t have enough information to provide an accurate answer.”

It’s also worth noting that ChatGPT is trained on a vast amount of text data from the internet, and the quality of the information it has seen during training may vary. Therefore, it’s possible that the model may generate a response that is incorrect, out-of-date or biased.

It is important to keep in mind that the output of a language model should be critically evaluated, and the model should not be considered as a sole source of information or knowledge.

OpenAI, the company behind ChatGPT, has an API which allows developers to access the model’s capabilities and integrate them into their own applications. They also provide a way to flag unsafe or unreliable content, which the OpenAI team will review, taking action if necessary.
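As an aside, here’s what calling that API looked like at the time, sketched with the (pre-1.0) openai Python library; the model name and prompt are illustrative assumptions.

```python
# A minimal sketch of querying OpenAI's completions API.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="What does the Scots phrase 'caw the crack' mean?",
    max_tokens=60,
)
print(response["choices"][0]["text"].strip())
```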

Note (18 Feb)

I’ve experienced ChatGPT, as it describes above, saying it cannot give an answer (in response to being asked to translate Scots text of more than a sentence or two), but I’ve also seen it return utter falsehoods, where it seems like it doesn’t know the answer and is just making things up.

This makes some sense given that ChatGPT is, as described by Stephen McAleese at LessWrong, a ‘next-word’ predictor that ‘developed unexpected capabilities such as the ability to learn on the fly from prompts (in-context learning), arithmetic, and code generation.’ Stephen Wolfram describes ChatGPT’s approach to writing an essay as ‘essentially […] just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word.’
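To make that concrete, here’s a toy version of the loop using the small, openly available GPT-2 model via Hugging Face’s transformers library (a stand-in on my part; ChatGPT’s own model isn’t public), with greedy decoding for simplicity:

```python
# Toy "given the text so far, what should the next word be?" loop,
# using GPT-2 as a stand-in and greedy (most-likely-token) decoding.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer.encode("The Scots language is", return_tensors="pt")
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits
    # Pick the single most likely next token given the text so far...
    next_token = logits[0, -1].argmax().reshape(1, 1)
    # ...append it, and repeat with the longer context.
    input_ids = torch.cat([input_ids, next_token], dim=1)

print(tokenizer.decode(input_ids[0]))
```

In practice, models sample from the predicted distribution rather than always taking the single most likely token, which is partly why ChatGPT’s answers vary from run to run.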

How do you think ChatGPT will impact software development?

ChatGPT and other large language models like it have the potential to greatly impact software development in several ways:

  1. Automated content generation: ChatGPT and other language models can be used to generate large amounts of high-quality text, such as documentation, product descriptions, and even code. This can greatly reduce the time and effort required for content creation and allow developers to focus on more complex tasks.
  2. Improved natural language processing: ChatGPT and other language models can be used to improve natural language processing (NLP) capabilities in a wide range of applications, such as chatbots, virtual assistants, and sentiment analysis. This can make it easier for developers to create NLP-enabled applications that are more accurate and user-friendly.
  3. Improved search: ChatGPT and other language models can be used to improve the quality of search results by better understanding the intent behind user queries. This can make it easier for developers to create search applications that are more accurate and relevant.
  4. Improved data analysis: ChatGPT and other language models can be used to analyze and extract insights from large amounts of text data. This can make it easier for developers to create data-driven applications that are more accurate and informative.
  5. Improved accessibility: ChatGPT and other language models can be used to improve accessibility for people with disabilities, such as those who are visually impaired or have difficulty reading. This can make it easier for developers to create applications that are more inclusive and accessible to a wider range of users.

Overall, ChatGPT and other large language models have the potential to greatly accelerate the development of a wide range of applications and services that rely on natural language processing and understanding. However, it’s worth noting that the use of these models requires significant computational resources, and the development of applications that rely on these models may require a significant investment of time and resources.

How might ChatGPT improve accessibility for people with disabilities?

ChatGPT and other large language models have the potential to improve accessibility for people with disabilities in several ways:

  1. Text-to-speech: ChatGPT and other language models can be used to generate high-quality text-to-speech (TTS) output, which can make it easier for people with visual impairments to access and understand written content.
  2. Automatic captioning: ChatGPT and other language models can be used to generate automatic captions for videos, which can make it easier for people with hearing impairments to access and understand spoken content.
  3. Language translation: ChatGPT and other language models can be used to translate text from one language to another, which can make it easier for people with language barriers to access and understand written content.
  4. Speech recognition: ChatGPT and other language models can be used to improve speech recognition capabilities in a wide range of applications, such as virtual assistants and chatbots, which can make it easier for people with mobility impairments or difficulty typing to interact with technology.
  5. Summarization: ChatGPT and other language models can be used to summarize long text content, which can make it easier for people with cognitive impairments or difficulty reading to access and understand written content.

It’s worth noting that the use of these models to improve accessibility for people with disabilities is still an active research area, and there are still some challenges to be overcome, such as the need for large amounts of high-quality, diverse data and the ability to fine-tune the models to specific use cases and users. Additionally, the performance of these models may vary depending on the specific task and data.

Can ChatGPT deal with languages that have some variation in spelling, for example, Scots and Nynorsk? I recently asked you about the meaning of a few Scots words and phrases and you did well despite the spelling variation.

Yes, ChatGPT and other large language models like it can deal with languages that have some variation in spelling, such as Scots and Nynorsk. However, it’s important to note that the performance of the model will depend on the amount and quality of data it has seen during training.

Large language models like ChatGPT are trained on vast amounts of text data from the internet, which means that they are exposed to a wide range of variations in spelling, grammar, and language usage. This allows the model to learn the different ways that words and phrases can be spelled and used, and thus to generalize well to new, unseen variations.

However, it’s worth noting that the model’s performance may not be perfect, especially if it has not seen enough examples of a specific variation during training. This is particularly true for low-resource languages or dialects, which may have less representation in the training data.

In the case of Scots, a dialect spoken in Lowland Scotland, it is similar to English, but with some variations in spelling, grammar, and vocabulary. However, the model can still understand and generate text in this dialect, thanks to the large amount of training data it has seen.

It’s also important to keep in mind that the model’s understanding of any language is limited to the data it has seen during training, and it may not generalize well to new, unseen variations of that language or dialect, especially if they are not well represented in the training data.

Scots is a language, not a dialect of English.

I apologize for any confusion caused; you are correct that Scots is recognized as a language by many, and it has its own distinct spelling, grammar, and vocabulary. I apologize for any confusion caused by my previous message. The distinction between dialect and language can be complex, and different people may have different opinions on how to classify different forms of speech.

As a machine learning model, ChatGPT’s understanding and classification of languages is based on the data it has seen during training. If the training data includes examples of Scots as a language, the model will be able to understand and generate text in this language.

It’s important to remember that a machine learning model’s understanding of a language will always be limited by the data it has seen during training, and it may not fully understand or reflect the nuances and complexities of a language, dialect or culture.

Does ChatGPT attach some kind of metric or parameter to each answer which estimates exactly how accurate the response likely is?

ChatGPT, like most language models, does not attach a metric or parameter to each answer that estimates how accurate the response likely is. Instead, it generates responses based on the patterns it has learned during training, and the quality of the responses can vary depending on the input and context.

Language models like ChatGPT are typically trained on a large amount of text data from the internet, and the quality of the information in this data can vary. Therefore, while the model may generate a response that is accurate and relevant in some cases, it may generate a response that is inaccurate, out-of-date, or biased in other cases.

There are some ways to evaluate the quality of the responses generated by a language model, such as human evaluation, where a human annotator assesses the quality of the response, or automatic evaluation, where the generated response is compared against a reference set. However, these methods are not always straightforward, and the results may vary depending on the specific task and data.

Additionally, it’s important to remember that the output of a language model should always be critically evaluated, and the model should not be considered as a sole source of information or knowledge.
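One partial exception worth noting: while ChatGPT’s interface doesn’t show a confidence score, the underlying completions API can return token-level log probabilities, which can serve as a rough proxy for how sure the model is. A sketch, again assuming the (pre-1.0) openai library and an illustrative model and prompt:

```python
# Inspect token-level log probabilities as a rough confidence proxy.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="The capital of Scotland is",
    max_tokens=1,
    logprobs=5,  # also return the top 5 candidate tokens
)
top = response["choices"][0]["logprobs"]["top_logprobs"][0]
for token, logprob in top.items():
    print(f"{token!r}: {logprob:.2f}")  # closer to 0 = more confident
```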

Could ChatGPT ever replace Google as the main way people search for information?

It is unlikely that ChatGPT or any other single language model could replace Google as the main way people search for information. Google Search is a complex system that uses a variety of techniques, such as web crawling, indexing, and ranking, to provide relevant and accurate search results to users.

ChatGPT and other language models like it are primarily focused on understanding and generating text, rather than searching and indexing the web. While they can be used to improve the quality of search results by better understanding the intent behind user queries, they are not currently capable of indexing and ranking the entire web.

Additionally, Google Search not only returns the most relevant results but also provides a vast array of features, such as autocomplete, spell check and voice search.

It’s worth noting that OpenAI, the company behind ChatGPT, has developed a search engine called DALL-E that can search and generate images, but it’s still in its early stages and not available for general use yet.1

That being said, ChatGPT and other language models like it can be used to improve the natural language understanding capabilities of search engines and other applications, making them more accurate and user-friendly. But it’s unlikely that it will replace Google or other search engines as the main way people search for information in the near future.

Do you expect ChatGPT will put a lot of white[-]collar workers out of work?

ChatGPT and other large language models like it have the potential to automate certain tasks that were previously done by white-collar workers, such as content generation, data entry, and customer service. However, it’s important to note that the impact of these models on employment will depend on a variety of factors, including the specific tasks and industries in question, as well as the rate at which the technology improves and becomes more widely adopted.

It’s likely that the use of ChatGPT and other large language models will result in some job displacement, particularly in areas such as data entry, customer service and content creation. However, it’s also possible that the use of these models will lead to the creation of new types of jobs and industries, such as those related to building, training, and maintaining the models.

It’s also worth noting that the use of language models like ChatGPT can also bring benefits to businesses and society as a whole by increasing efficiency and productivity, reducing costs, and improving the quality of products and services.

It’s important to keep in mind that the impact of ChatGPT and other language models on employment is a complex issue that depends on many factors. It’s also important to consider the potential benefits and drawbacks of the technology, and to actively work on the development of policies and programs that can mitigate the negative effects and maximize the positive effects of the technology on employment and society.

Is it possible, on an infinite timescale, that ChatGPT or another AI could actually become sentient?

The question of whether an AI could become sentient is a topic of ongoing debate and research in the field of artificial intelligence. Sentience is the capacity to have subjective experiences and conscious awareness, and it’s a difficult concept to define and measure.

Currently, AI systems like ChatGPT and other large language models are based on machine learning techniques that are used to process and analyze large amounts of data, but they don’t have the ability to feel emotions, to be conscious or self-aware, or to experience the subjective aspects of the world.

It’s important to note that current AI systems are not sentient; they are based on mathematical models and algorithms, and their behavior is determined by the parameters and data that are used during training. There’s no evidence that current AI systems have the ability to become sentient, and it’s not clear if it would ever be possible for an AI to become sentient.

That being said, the field of AI is rapidly evolving, and new developments in the field of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) may bring new possibilities. However, the question of whether an AI could become sentient is still a topic of debate and it’s not something that is likely to happen in the near future.

That’s somewhat of a relief.

I understand that the idea of an AI becoming sentient can be a concern for some people. It’s important to remember that the current state of AI technology is still quite limited, and the capabilities of AI systems like ChatGPT are still far from matching human intelligence.

It’s also worth noting that the development of AI is guided by a set of ethical principles, such as transparency, accountability and explainability, which aim to ensure that the technology is used for the benefit of humanity and that its development is guided by ethical considerations.

It’s important to continue to have open and honest conversations about the potential implications of AI and to actively work on the development of policies and programs that can mitigate the negative effects and maximize the positive effects of the technology on society.

Thank you for answering my questions.

You’re welcome! I’m glad I could help. If you have any other questions, feel free to ask.


  1. ChatGPT has limited knowledge of events after 2021. DALL-E was made widely available towards the end of 2022. ↩︎