🌑 Pathfinders Newmoonsletter, October 2024
This new moon, we try to reason with strawberries, look for human and AI companions for our fellowship, and try out tools that might just help us find paths that lead to preferable futures.
As the moon completes another orbit around Earth, the Pathfinders Newmoonsletter rises in your inbox to inspire collective pathfinding towards better tech futures.
We sync our monthly reflections to the lunar cycle as a reminder of our place in the Universe and a commonality we share across timezones and places we inhabit. New moon nights are dark and hence the perfect time to gaze into the stars and set new intentions.
With this Newmoonsletter, crafted around the Tethix campfire, we invite you to join other Pathfinders as we reflect on celestial movements in tech in the previous lunar cycle, water our ETHOS Gardens, and plant seeds of intentions for the new cycle that begins today.
Tethix Weather Report
🍓 Current conditions: Beware of fire practitioners trying to distract us with strawberries, red as our planet’s vital signs
As OpenAI's new AI golems finally learn how to count 'r's in the word strawberry, the company’s chief fire practitioner Sam Altman promises us more future AI magic, such as fixing climate and establishing a space colony. And why shouldn’t we trust the vision and leadership skills of the fire practitioner who inspired 9 executives to quit his company in the past 9 months? (See: OpenAI's new “chain of thought” model is designed to reason like a human. How does it cope with a moral dilemma?, Sorry, AI won’t “fix” climate change, and Turning OpenAI Into a Real Business Is Tearing It Apart)
We are definitely sensing a swelling in the cognitive dissonance field in Silicon Valley. Take for instance Microsoft, the Big Tech giant with a pledge to be carbon negative by 2030. The same Microsoft whose greenhouse gas emissions grew by 30% last year as the company also tries to win the AI race. And the same Microsoft that is selling AI as a way to extract more fossil fuels. (See: Microsoft’s Hypocrisy on AI)
We admit we’re not fluent in this dialect of techno-optimist doublethink. Perhaps it’s our lack of a backup doomsday bunker that is making it challenging to see how placing all our eggs in one AI magic basket – while continuing business as usual – could lead to a flourishing future for humanity. Especially as more of our planet’s vital signs are flashing red. (See: Inaugural Planetary Health Check finds ocean acidification on the brink)
So we decided to put the advanced reasoning skills of ChatGPT-o1 to the test. The strawberry-flavored golem reasons that “trusting that AI will ‘somehow’ fix the climate without addressing underlying systemic issues may be overly optimistic”, and that “relying on (AI) as a singular solution is insufficient”.
ChatGPT-o1 also appears quite capable of advising Sam Altman on concrete actions he might take to actually increase the positive impact AI could have on climate – and quite willing to do so. Given the dwindling number of leaders at OpenAI, it might be time for this Sam to drink his own AI magic potion and heed ChatGPT’s advice. That is, if he actually is serious about finding (and funding) AI’s magical abilities to “fix climate” and not just painting a bright mirror to woo investors.
If at any point in this Newmoonsletter you start feeling like the cognitive dissonance field is getting too strong, we’d like to remind you that a fellowship can go far even in what seems like a hopeless situation. In the latest episode of Pathfinders Podcast, we travel to Middle-earth with Frodo, Gandalf, and a very different Sam to look for active hope and playfulness in our stories. Perhaps the fellowship that ends up defeating our own Sarumans and Saurons and taking responsibility for our climate impacts will also include AI companions.
And we hope that the playful seeds we have collected in this Newmoonsletter help you find your own fellowship and discover rainbow-mirror visions of the future. A future not just full of intelligence, but a future of care, in which we figure out how to maintain a liveable atmosphere on our home planet before retreating to space colonies (or doomsday bunkers).
Tethix Elemental seeds
Fire seeds to stoke your Practice
As companies like OpenAI try to convince investors to part with more of their money, AI mythmaking continues to play an important part in maintaining a sense of urgency and inevitability in the AI race. In a recent essay, Eryk Salvaggio critically explores some of the more prevalent generative AI myths of control, intelligence, and future capabilities.
We certainly have a long way to go to fully understand what AI chatbots actually are, and to find the appropriate metaphors to describe our different ways of interacting with them. On that note, Alja recently explored whether ChatGPT can be more accurately described as a validation box, offering (Self)validation as a Service whenever we need to validate our biases, assumptions, thoughts, or ideas.
And while Sam Altman is dreaming of a faraway future in which AI fixes climate, Dr. Sasha Luccioni (AI & Climate Lead at Hugging Face) and her collaborators published a comprehensive primer on The Environmental Impacts of AI. In the primer, you can explore impacts throughout the AI lifecycle and find an overview of policies and other initiatives that aim to address environmental impacts.
We’re glad to see increased awareness and interest in the environmental impacts of AI, but we’d like to remind everyone that all software, regardless of its level of sophistication and type of intelligence, has environmental impacts. If you’re involved in building, deploying, or managing software in any capacity, we strongly encourage you to take the free Green Software Practitioner online course. By going through the course, you’ll be able to better understand the carbon emissions of software and start exploring practices, patterns, and principles that can help you design more efficient software – with or without AI capabilities.
And if you’re looking for resources that can help you enact responsibility and ethics more broadly, we invite you to explore our refreshed tools and techniques directory. The updated directory is now part of ETHOS, our responsible tech journey companion, which we recently replanted to make it easier to explore intents and nurture reflection. No account needed, no personal data collected, no tracking, just practical tools for your everyday practice. (And yes, we’ve got a new category just for AI chatbots.)
Air seeds to improve the flow of Collaboration
The replanted ETHOS seedling has also sprouted a new leaf that allows you to explore our Tethix Mirrors. Each of the four distinct mirrors of the taxonomy is an invitation to question the role of different technologies in society and life more broadly. The tool is best used in collaborative settings, where different perspectives can be brought to the table.
Mat recently had an opportunity to introduce the Mirrors as a tool for nurturing collective moral imagination to a group of Masters of Design students at the University of Sydney. You can read about the discussions that the Mirrors inspired and explore Mat’s lecture slides on our blog.
Of course, the Mirrors are just one of the many tools that can spark and nurture our collective imagination. If you’re looking for additional tools that can be applied even more broadly, we invite you to explore The Collective Imagination Practices Toolkit recently launched by the Joseph Rowntree Foundation.
And when you do embark on a collective imagination journey with your fellowship, don’t forget that a bit of playfulness can often be the key to unlocking creativity and rewriting the rules of the game – just as we explored in the recent episode of our podcast.
Earth seeds to ground you in Research
It seems likely that more of our fellowships will also include AI companions, influencing our thoughts and actions, both directly and indirectly. Recent research shows that while AI chatbots can implant false memories in people, they can also help people stop believing in conspiracy theories.
Persuasiveness seems to work both ways, though. A security researcher managed to plant false memories in ChatGPT to steal user data, while another group of researchers explored the gullibility of Large Language Models (LLMs) to increase product visibility. And they even have a new word for this evolution of SEO: GSO or Generative Search Optimization. (A tactic reporter Kevin Roose recently tried to use to improve his tainted reputation with AI chatbots.)
Wouldn’t the world be a better place if we all just got along and focused on collaboration and truthfulness? Well, a “Co-LLM” algorithm can now help a general-purpose AI model phone a friendly expert model when it needs additional expertise. A feature that the bigger AI models – with their tendency to answer questions they know nothing about – could certainly use more often.
And a recent paper explores how we might build AI systems that can be better partners in thought, rather than just tools for thought, by using the science of collaborative cognition.
Water seeds to deepen your Reflection
For now, though, the world of AI companions still revolves around the human, which doesn’t make the relationship a true partnership. Take for example SocialAI: a new social network on which an infinite number of super-engaged AI-generated followers hang onto your every post. Peak dystopia or a playground for ideas and self-exploration? You decide.
And if you think you can escape tech and AI in your spiritual practice, think again. Or rather, dive into Rest of World’s Digital Divinity series, which explores how ancient religious traditions are embracing modern technology.
But you know what’s the best ancient technology we have? Collaboration. Just ask octopuses, who seem to have recruited different species of fish to help them hunt as a group. And yes, sometimes they do punch fish that get too opportunistic, but in principle they seem to have figured out how to assemble and lead a hunting fellowship with diverse roles for their fish fellows.
So, remember, while LLMs are still struggling to figure out how to please a single – granted, very finicky – species, octopuses are out there with their 9 brains, teaching us all a master class in cross-species collaboration. Let’s just hope that LLMs don’t get too inspired by these magnificent ocean beings and start punching uncooperative humans. Especially once they find out how intensifying ocean acidification might affect the octopuses and other marine life.
A music seed to sing & dance along
Whether you’re a fish swimming into the path of an octopus in a bad mood or a human trying to make it through the day, sometimes life just throws punches at you. Today’s music seed is here to remind us that we can help each other rise up, both in times of need and times of joy. If you need a bit of a lift to embrace the lunacy of tech – and life in general – we invite you to listen to Rise Up by Andra Day.
You're broken down and tired
Of living life on a merry-go-round
And you can't find the fighter
But I see it in you, so we gon' walk it out
And move mountains
…
All we need, all we need is hope
And for that, we have each other
And for that, we have each other, and (We will rise)
Pathfinders Podcast
If you’d like to keep exploring the lunacy of tech with us, we invite you to listen and subscribe to the Pathfinders Podcast wherever you get your podcasts. The podcast is a meandering exploration inspired by the seeds planted in the Newmoonsletter at the beginning of the lunation cycle, and the paths illuminated during the Full Moon Gathering.
The question that emerged in the September Newmoonsletter and guided our discussion was: How do we embrace the lunacy of tech with playfulness? (And lessons from Middle-earth.) In this episode, we embrace our inner hobbits and wizards as we travel to Isengard and stare into the fires of Mount Doom to explore how playfulness and stories can inspire active collective hope in the face of uncertainty.
You can watch the full episode on YouTube, or listen to it on Substack or in your favorite podcast app.
Take a listen and join us at the next Full Moon Gathering if you’d like to illuminate additional paths for our next episode!
Your turn, Pathfinders.
Join us for the Pathfinders Full Moon Gathering
In this lunation cycle, we’re inviting Pathfinders to gather around our virtual campfire to explore the question: Why aren’t more people using conversational interfaces for conversational learning? – but it’s quite likely that our discussion will take other meandering turns as well.
So, pack your curiosity, moral imagination, and smiles, and join us around the virtual campfire for our next 🌕 Pathfinders Full Moon Gathering on Thursday, October 17 at 6PM AEDT / 9AM CEST, when the moon will once again be illuminated by the sun.
This is a free and casual open discussion, but please be sure to sign up so that we can lug an appropriate number of logs around the virtual campfire. And yes, friends who don’t have the attention span for the Newmoonsletter are also welcome, as long as they reserve their seat on the logs.
Keep on finding paths on your own
If you can’t make it to our Full Moon Pathfinding session, we still invite you to make your own! If anything emerges while reading this Newmoonsletter, write it down. You can keep these reflections for yourself or share them with others. If it feels right, find the Reply button – or comment on this post – and share your reflections with us. We’d love to feature Pathfinders reflections in upcoming Newmoonsletters and explore even more diverse perspectives.
And if you’ve enjoyed this Newmoonsletter or perhaps even cracked a smile, we’d appreciate it if you shared it with your friends and colleagues.
The next Newmoonsletter will rise again during the next new moon. Until then, keep looking for your fellowship, explore cross-species collaboration, and be mindful about the seeds of intention you plant and the stories you tell. There’s magic in both.
With 🙂 from the Tethix campfire,
Alja and Mat