Gurteen Knowledge Letter
Issue 288 – June 2024
For over 20 years, I have helped numerous individuals initiate Knowledge Cafés within their organizations. If you’re interested in learning how to get started, take a look here. Feel free to email me for further advice; I would be happy to help you.
Contents
- AI and the Art of Remixing
  The postproduction paradigm
- How We Get to Knowledge
  Not through rational methods alone
- Are Large Language Models Sentient?
  Sentience is the capacity to experience sensations, thoughts, and feelings
- How Could You or I Be Wrong About This?
  A crucial question to ask in a conversation
- ChatGPT, Explained
  Is ChatGPT going to take over the world?
- The Real Loss in Losing an Argument
  The opportunity to engage in a constructive, enlightening conversation
- On Conversation by Ben Franklin
  The Pennsylvania Gazette, October 15, 1730
- Humans + AI Learning Community!
  Keeping up with AI developments
- Please support my work
- Unsubscribe
- Gurteen Knowledge Letter
AI and the Art of Remixing
The postproduction paradigm
AI language models are revolutionizing creative expression by mirroring the remixing techniques of postproduction art. This new capability challenges traditional notions of original authorship, allowing creators to curate, remix, and collaborate with existing cultural materials in innovative ways.
I have written about this concept in my blook, and Donald Clark has also discussed it on his blog and on LinkedIn, renaming it "postcreation" and writing:
Postcreation: a new world. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.
Credit: Donald Clark
This perspective offers an exciting glimpse into the impact of LLMs and Generative AI. These technologies hold immense potential with far-reaching consequences. Over the long term, their influence will be transformative across various fields.
How We Get to Knowledge
Not through rational methods alone
I love this quote by David Weinberger because it accurately describes how we acquire knowledge—not just through rational methods but also via curiosity, social bonds, intuition, and even mistakes. His crucial insight that "knowledge is not determined by information" is spot-on. In complex situations, having data isn't enough; it's deciding which information matters that truly defines knowledge.
We get to knowledge — especially "actionable" knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles.
Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.
Are Large Language Models Sentient?
Sentience is the capacity to experience sensations, thoughts, and feelings
Ross Dawson recently started an interesting thread on LinkedIn about whether Large Language Models are, or ever will be, sentient. The thread was prompted by an article by Stanford luminary Fei-Fei Li, who argues that AI isn't sentient because it lacks subjective experiences.
Sentience is the capacity to experience sensations, thoughts, and feelings, and to have subjective experiences. It involves being aware of one’s surroundings, having a sense of self, and possessing the ability to perceive and respond to stimuli. Sentient beings are considered to have a level of consciousness, and their experiences are subjective and unique to them.
This provoked me to chat with ChatGPT on my iPhone, and I'm 98% convinced that current large language models (LLMs) are not sentient. At the end of my conversation, I asked ChatGPT to provide three references to articles claiming that LLMs are sentient. None of the articles stated this claim definitively, but the article below systematically asks: Could AI be sentient with Large Language Models?
The bottom line of the article: "... we can't decisively confirm or deny the sentience of current LLMs, and 'finding a conclusion counterintuitive or repugnant is not sufficient reason to reject the conclusion'. Thus, we should at least take the hypothesis seriously and the prospect of AI sentience even more seriously."
Side note: These days, I assign a percentage to my beliefs and challenge others to persuade me to adjust that percentage. There is nothing I am 100% sure about.
How Could You or I Be Wrong About This?
A crucial question to ask in a conversation
Intellectual humility and open-mindedness are vital for constructive dialogue. Polarization and confirmation bias hinder productive conversations on complex issues. Asking How could you or I be wrong about this? promotes self-reflection, critical thinking, and openness to alternative perspectives.
ChatGPT, Explained
Is ChatGPT going to take over the world?
If you feel you do not understand the basics of ChatGPT, this introduction, ChatGPT Explained, may be of help.
The Real Loss in Losing an Argument
The opportunity to engage in a constructive, enlightening conversation
When you lose an argument because your opponent resorts to dishonest argumentative strategies, such as personal attacks and attempts to make the exchange emotional, the actual loss isn't losing the argument itself. The real tragedy lies in the missed opportunity for a meaningful, thought-provoking dialogue that could have enriched both of you.
It's not about winning or losing; it's about the exchange of ideas and perspectives. When an argument devolves into a rhetorical battle rather than a respectful conversation, both sides miss out on the chance to challenge their own beliefs, consider alternative viewpoints, and potentially reach a deeper understanding of the issue.
Ultimately, it's not about one person losing; it's about both individuals losing the opportunity to engage in a constructive, enlightening conversation.
On Conversation by Ben Franklin
The Pennsylvania Gazette, October 15, 1730
In "On Conversation," an essay published in The Pennsylvania Gazette on October 15, 1730, Ben Franklin addressed the issue of people lacking the skills to engage effectively in conversation. Interestingly, Franklin's advice focused more on being pleasant and likable rather than on cultivating deep, thought-provoking discussions.
I have written more about Franklin's essay in my blook.
Humans + AI Learning Community!
Keeping up with AI developments
If you are interested in AI and wish to keep up with developments, you may like to join the Humans + AI Learning Community!, created by Ross Dawson. It is an online community, free to join, with over 200 members.
Please support my work
I have been publishing the Gurteen Knowledge Letter every month for over 20 years, and most of you have received it for five years or more. The Knowledge Café also celebrated its 20th birthday in September 2022.
If you find my work valuable, please consider supporting me by donating $1 (or more) a month to become a Patron or making a small one-off contribution. Your assistance will help cover some of my website hosting expenses.
I have over 50 patrons so far. Thank you all.
Unsubscribe
If you no longer wish to receive this newsletter, please reply to this email with "no newsletter" in the subject line.
Gurteen Knowledge Letter
The Gurteen Knowledge Letter is a free monthly e-mail-based newsletter. Its purpose is to stimulate thought about Conversational Leadership and Knowledge Management. You can find back issues here.
If you don't already receive this newsletter, you can register to receive it by email each month.
It is sponsored by the Henley Forum of the Henley Business School, Oxfordshire, England.
You may copy, reprint or forward all or part of this newsletter to friends, colleagues, or customers, so long as any use is not for resale or profit and I am attributed. If you have any queries, please get in touch with me.
David GURTEEN
Gurteen Knowledge
Fleet, United Kingdom