Pathfinders Newmoonsletter, September 2024
While Big Tech refuses to embrace the demure and mindful trend, we sing to a burning planet, explore some recent research on LLMs, and wonder whether AI chatbots could help us get our sh*t together.
As the moon completes another orbit around Earth, the Pathfinders Newmoonsletter rises in your inbox to inspire collective pathfinding towards better tech futures.
We sync our monthly reflections to the lunar cycle as a reminder of our place in the Universe and a commonality we share across timezones and places we inhabit. New moon nights are dark and hence the perfect time to gaze into the stars and set new intentions.
With this Newmoonsletter, crafted around the Tethix campfire, we invite you to join other Pathfinders as we reflect on celestial movements in tech in the previous lunar cycle, water our ETHOS Gardens, and plant seeds of intentions for the new cycle that begins today.
Tethix Weather Report
🔥 Current conditions: Even corn is sweating hard in this scorching AI summer as fire practitioners keep burning money, people, and the planet to keep their golem kilns running
This AI summer is getting so hot that even corn is sweating heavily in the US Midwest, where fire practitioners just happen to be building new data centers with lower cooling bills. Now, the multiplying data centers that are used as kilns for AI golems are not the only thing that's making corn sweat on our warming planet. But the rising emissions the fire practitioners are trying to carbon-account away are sure to keep increasing cooling costs, regardless of where they try to tax-evade next. So unless the plan is to harvest corn sweat for data center cooling, it might be time for the fire practitioners to cool it a bit with their world domination plans. (See: "Corn Sweat" and Climate Change Bring Sweltering Weather to the Midwest, The Move to Improve: Why the Midwest is Housing More Data Centers, and How Big Tech is quietly trying to reshape how pollution is reported)
After all, a recent ruling has found Google's fire practitioners to be already winning the game of Monopoly they're playing with our search queries. Strangely enough, the new apple-flavored AI golems that will be unleashed onto our made-in-the-global-South devices might soon be answering questions that would otherwise be "googled", thus helping Google's fire practitioners argue they are but a humble tech startup trying to earn an honest dime with ads nobody wants to click on. (See: Google's 'monopoly' ruling may adversely affect Apple. Will Apple Intelligence come to its rescue?)
Speaking of power-hungry tech mages pretending to be but humble apprentices interested in benefiting rather than exploiting humanity: powerful fire practitioners have discovered a new shade of morally gray by realizing they can acquire potential competitors without actually acquiring them, thus avoiding regulatory scrutiny. So, when we say that ChatGPT appears to have a stronger moral compass than its makers, we want to remind everyone that the bar is getting lower every day. (See: The New A.I. Deal: Buy Everything but the Company)
But we get it. It's hard trying to convince your shareholders that you'll continue growing forever while at the same time convincing regulators that you're nothing more than an innovative tech startup that's creating jobs for AI golems, er, humans. And to keep up with the incessant demands for growth while you're burning money on golem kilns and fees, you sometimes have to sacrifice a couple of thousand humans to appease Moloch shareholders. (See: Sweden's Klarna says AI chatbots help shrink headcount)
We cannot help but wonder: who will be paying for all these magic AI golems as the well-paid tech workforce shrinks to the bare minimum, and fewer people can afford multiple, increasingly expensive subscription services? We suppose that is a concern for another quarter. For now, Moloch shareholders have been appeased, and the data centers have not yet been flooded or incinerated in wildfires.
And hey, perhaps we'll all finally get the increase in abundance, leisure time, and more just distribution of wealth that tech-enabled progress was supposed to deliver. And enough disposable income to afford several subscription-based AI companions, on top of groceries! As of now, the 15-hour workweek Keynes predicted back in the 1930s still seems like science fiction, and the details of how our obsession with growth is going to keep us within planetary boundaries remain vague. Ah, wait. We almost forgot. AGI. Miracle energy source. AI fixes climate. Immortality. Got it. (See: Ray Kurzweil: Technology will let us fully realize our humanity, What should we do with the time that new technologies save?, and The tipping points of climate change – and where we stand)
You see how the fire practitioners of Silicon Valley are willing to bet everything and everyone for a chance of winning the AI race? Not very demure, not very mindful. But as they continue releasing their AI golems into the devices and apps we rely on, we wonder whether their golems can be enlisted to help us make more mindful decisions. In a special bonus episode of Pathfinders Podcast, we interview ChatGPT to explore how these AI golems could help humanity rebuild communication bridges, and think different.
And we hope that the seeds we have collected for this Newmoonsletter can help you explore perspectives that might just lead to paths where we are able to preserve a liveable planet for our children. A human can still dream, right? (At least until fire practitioners force us to upload our consciousness into the cloud.)
Tethix Elemental seeds
Fire seeds to stoke your Practice
As generative AI approaches the Peak of Inflated Expectations, many people still fall on the extreme ends of the spectrum of responses to it.
On the freezing end that doesn't want to touch gen AI with a stick, people are still dismissing the potential of Large Language Models (LLMs) due to their limitations and biases. If you fall on this end, you might want to explore today's gen AI product landscape with a bit more curiosity, even if it is just to provide better criticism.
On the other, wildfire end of the spectrum, full of unbridled enthusiasm, people see LLMs as the only tool they'll ever need and try to hammer down every (im)possible task with them. If you fall on this end, it might help to get a broader sense of what it takes to develop an LLM application, and this three-stage framework by Ben Dickson is a good starting point.
If you're like us, you probably fall somewhere in between and realize that the choice of whether to use LLMs should ideally be based on more than just a financial spreadsheet and benchmark performance. The development of LLMs is filled with ethical dilemmas of all shapes and sizes. As a recent Vox article explores, it might actually be impossible to run a big AI company ethically, given how Anthropic (which was supposed to be an ethical haven for OpenAI refugees) has been acting lately.
At least Anthropic is a bit more open than OpenAI when it comes to sharing the system prompts that define Claude's personality and style. Along with the uncovered system prompts for Apple's upcoming AI assistant, these system prompts provide a fascinating look inside the "minds" of LLMs and how they might be inclined to interact if left to their own devices.
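If you'd like to poke at this yourself, wiring up a system prompt takes only a few lines. Below is a minimal sketch using Anthropic's Python SDK; the model name reflects what was publicly available in mid-2024, and the prompt text is our own illustrative stand-in, not Claude's actual system prompt.

```python
# Minimal sketch: a system prompt defines the assistant's personality and
# style before any user input arrives. Requires `pip install anthropic`
# and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # model available as of mid-2024
    max_tokens=300,
    # Illustrative persona, not Anthropic's published system prompt.
    system=(
        "You are a thoughtful assistant. Be curious and warm, admit "
        "uncertainty, and gently push back on ethically questionable requests."
    ),
    messages=[
        {"role": "user", "content": "Is it OK to clone someone's voice without consent?"}
    ],
)
print(response.content[0].text)
```

Swap in a different system prompt and the very same model will happily adopt a very different personality, which is exactly why these uncovered prompts make for such revealing reading.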
That said, OpenAI did recently release a GPT-4o System Card that reports on how catastrophically risky their latest publicly available model is. We appreciate the transparency, but it's worth noting that GPT-4o's persuasive capabilities have crossed into what OpenAI defines as the medium-risk threshold. Even OpenAI openly acknowledges that the anthropomorphization of and emotional reliance on models such as GPT-4o are societal impacts worthy of further scrutiny. So even if you think LLMs are nothing more than stochastic parrots, we think we should all be a bit more curious about how LLMs make humans feel as they become increasingly convincing imitators of human interactions.
And while we appreciate these glimpses into how the most popular LLMs work and affect their users, we're still holding our breath when it comes to environmental transparency: we're still waiting for the day when OpenAI, Anthropic, Google, Microsoft, Meta & co. decide to share the environmental costs of their large models instead of trying to rewrite the rules on net zero. (If you're unsure what net zero even means, we highly recommend checking out the Climate commitments section in the free Green Software Practitioner course.)
At least we can now look forward to the day when regulations such as the EU AI Act will force providers of general-purpose AI models to disclose the energy consumption of their models. In fact, the European Commission is currently looking for proposals on how to best measure and report these emissions. We're definitely with Sasha Luccioni and others who wonder: Light bulbs have energy ratings – so why can't AI chatbots?
And while energy consumption is just a part of the environmental impact puzzle, we have to start somewhere. Given the current need for decisive climate action, knowing the planetary cost of generative AI might help us rethink how we deploy and use these technologies.
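And while we wait for providers to report their numbers, curious practitioners can already get rough estimates for their own workloads. Here is a minimal sketch using the open-source codecarbon library; run_inference() is a hypothetical stand-in for whatever model calls you actually make, and the output is an estimate based on hardware power draw and regional grid carbon intensity, not a full lifecycle assessment.

```python
# Minimal sketch: estimating the energy and carbon footprint of a workload
# with the open-source codecarbon library (`pip install codecarbon`).
from codecarbon import EmissionsTracker

def run_inference():
    # Hypothetical stand-in for real work, e.g. a batch of LLM generations.
    sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="genai-footprint-demo")
tracker.start()
try:
    run_inference()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Rough numbers are still numbers, and they can start useful conversations about where and how we choose to run these models.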
Air seeds to improve the flow of Collaboration
But let's be honest: while having more data and transparency would be welcome, we should also remind ourselves that we already know what needs to be done to ease our pressure on planetary boundaries and hopefully avoid the worst-case climate scenarios. We just can't seem to agree to act on the science and implement existing ideas into practice at the scale that's required. Given that LLMs can access vast amounts of this knowledge and also seem to be pretty good societal mirrors, could they help us, humans, collaborate and thus act better?
Various AI assistants and companions are already starting to play a bigger role in our lives, prompting some to rebrand AI as "addictive intelligence", especially when it comes to kids and younger adults. AI emotional lock-in is certainly something to watch out for, especially as companies look for new monetization opportunities while at the same time making AI chatbots more human-like.
In our podcast, we managed to have a pretty decent group voice chat with ChatGPT, and it appears that the upcoming Advanced Voice Mode will further improve this experience. As already mentioned, even OpenAI decided to highlight the potential for emotional reliance in their recently released System Card (especially when it comes to ChatGPT's Voice Mode), which might also be used to nudge people towards action.
As our interactions with AI chatbots become more persuasive, we should also keep in mind the human tendency to perceive machines as impartial, which might make AI chatbots appear trustworthy as mediators in human affairs. Recent research seems to indicate that AI assistants do have a strong influence on team discussions. And another group of researchers is exploring how well-timed AI interventions might help improve alignment within teams and help humans follow critical procedures. And while the US mayoral candidate who promised to govern with the assistance of a fine-tuned LLM was not elected, people are already collaborating with AI chatbots in different contexts, both in 1-to-1 and group settings.
So perhaps AI chatbots could actually help us rebuild communication bridges in our increasingly polarized world, especially online, where we should also explore more democratic governance approaches. We do hope that AI chatbots will not be used just to attend meetings in our place and quietly summarize "key takeaways", an area where it seems we already need to develop a clear etiquette. Instead, AI chatbots could help us find common ground, encourage equal participation, and perhaps even remind us about the ethical and sustainability commitments our companies made on their websites.
Earth seeds to ground you in Research
Of course, even ChatGPT kept reminding us about its biases and limitations during our podcast interview. While we do have glimpses of the system prompts that define the behavior of some of the most popular AI chatbots, we still don't know what exactly is going on in LLMs' "minds".
Recent research is showing us that LLMs can already appear more human than humans, that they excel at inductive reasoning but struggle with deductive tasks, and that they might develop their own understanding of reality as their language abilities improve.
A new METR general capability evaluation has also found that "the GPT-4o agent completes around 60% of the tasks that our human baseliners completed in less than 15 minutes, but can complete around 10% of tasks with human baselines that took over 4 hours." Using LLM-based agents on tasks they are able to complete is also, unsurprisingly, cheaper: "(…) the average cost of using an LM agent is around 1/30th of the cost of the median hourly wage of a US bachelor's degree holder." We'd like to remind anyone seeing these results and thinking they can fire their expensive human workers that the devil in all machine learning is still in well-defined tasks.
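To make that 1/30th figure concrete, here's a quick back-of-the-envelope calculation; the wage below is our own illustrative assumption, not a number from the METR report, which only gives the ratio.

```python
# Back-of-the-envelope reading of METR's cost comparison.
assumed_hourly_wage_usd = 38.00  # illustrative wage, US bachelor's degree holder
agent_cost_usd = assumed_hourly_wage_usd / 30  # METR: agent costs ~1/30th

print(f"Human baseline: ~${assumed_hourly_wage_usd:.2f} per hour of task work")
print(f"LM agent: ~${agent_cost_usd:.2f} for comparable tasks")
# Caveat: the comparison only holds for tasks the agent actually completes,
# which skews towards shorter, well-defined ones.
```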
And obviously, LLMs and other AI systems pose additional serious risks besides being faster and cheaper than humans. To help you keep track of how things can go mildly to horribly wrong, MIT has released a database of AI risks. Their AI Risk Repository "captures 700+ risks extracted from 43 existing frameworks". And if that isn't enough for you, you can also head to the AI Impacts Wiki to explore burning questions about the good, the bad, and the ugly side of AI.
Meanwhile, we are wondering when we'll reference the first research paper autonomously written and published by an AI Scientist. And how will AI researchers deal with the dreaded human Reviewer #2? Perhaps all peer reviewers will become AI-powered and thus more mindful with the feedback they provide to their peers? The race for the highest impact score is sure to get interesting as more AI researchers enter the academic publishing arena, either openly or in the shadows.
At least both human and AI researchers can now reference a new draft definition of what open-source AI is, and what it isn't. And (surprise, surprise!) Meta's Llama 3 does not meet the OSI-defined criteria due to its license and usage restrictions, despite Zuck's attempts to rebrand himself as an open-source AI evangelist. And while the OSI definition doesn't require the release of raw training data, the model provider should share enough information about the data that was used, along with the code that was used for training.
Will the new definition lead to less open-source-washing? Probably not. But we should celebrate whenever a group of highly opinionated humans manages to agree on something! (With or without AI assistance.)
Water seeds to deepen your Reflection
Now, take a deep breath. If you're dealing with a bit of anxiety after reading this Newmoonsletter (or any other news, really), we'd like to reassure you that this might be a perfectly healthy reaction to the lunacy of tech and modernity more broadly. Listening to Rachel Donald's discussion with Steffi Bednarek, a climate psychotherapist, on the latest episode of the Planet: Critical podcast might help you feel less alone.
And if you're looking for a bit of fiction to escape to, while still exploring how we might face our current challenges, we suggest checking out Any Human Power by Manda Scott. (Her podcast Accidental Gods is also a good source of inspiring ideas and people.)
But when you truly need a break from it all, put on Earth.fm and listen to nature sounds from different corners of our wonderful planet. And get out there when it's not too hot, place your hands on a tree, and express some gratitude to the more-than-human life that has made all of this possible.
A music seed to sing & dance along
Michael Jackson released Earth Song in 1995, when the average global temperature was about 0.5 °C above the pre-industrial average. We're now at around 1.2 °C and already starting to breach the Paris-agreed climate threshold of 1.5 °C. Globally, we're still burning about 1.6 times more fossil fuels than in 1995, every year!
Keep in mind that this song was released before Google launched, before we had smartphones in our pockets, before cloud computing, and before the internet had enough data to train an LLM. Yet the song's message and MJ's cries resonate even more loudly today, almost 30 years later.
Did you ever stop to notice
All the blood we've shed before?
Did you ever stop to notice
This crying Earth, this weeping shore?…
What about nature's worth? (Ooh)
It's our planet's womb (What about us?)
What about animals? (What about it?)
Turned kingdom to dust (What about us?)…
Pathfinders Podcast
If you'd like to keep exploring the lunacy of tech with us, we invite you to listen and subscribe to the Pathfinders Podcast wherever you get your podcasts. The podcast is a meandering exploration inspired by the seeds planted in the Newmoonsletter at the beginning of the lunation cycle, and the paths illuminated during the Full Moon Gathering.
The question that emerged in the August Newmoonsletter and guided our discussion was: Should we use AI chatbots as mediators in human affairs? This episode was inspired by our observations that ChatGPT seems to have a stronger moral compass than its makers. When asked about the ethics of questionable business decisions such as using people's voices without their consent, ChatGPT presents diverse considerations from different points of view and advocates for upholding ethical standards. This made us wonder: would executives like Sam Altman make more ethical decisions if they were using their creations in day-to-day moral deliberations? And even more broadly, could we use Large Language Models (LLMs) such as ChatGPT to help us communicate better, resolve interpersonal conflicts and tensions, and perhaps even make better collective decisions?
And because it didn't feel right to keep talking about ChatGPT without getting its perspective, we recorded a special bonus episode with ChatGPT as our guest of honor. In the follow-up episode A chat with ChatGPT on AI chatbots as mediators, we play with the limits of LLMs and ChatGPT's Voice Mode to explore human and AI biases, and the potential benefits of AI-supported mediation. We try to imagine how collaborative AI tools might help humans communicate better, how organizations like OpenAI might develop these tools more responsibly by experimenting with different governance models, and other considerations that Kai helps us surface. In the second half of the episode, we provide additional insights into the how and why of the episode and our hopes to inspire curiosity and playfulness in the ways we explore the potential of AI chatbots.
To capture the full awkwardness and curiosity of chatting with an AI guest, we also recorded the bonus episode on video. You can head over to YouTube to see what happens when two humans and a robot walk into a bar, er, record a podcast. Both episodes are also available in audio-only format wherever you get your podcasts.
Take a listen and join us at the next Full Moon Gathering if you'd like to illuminate additional paths for our next episode!
Your turn, Pathfinders.
Join us for the Pathfinders Full Moon Gathering
In this lunation cycle, we're inviting Pathfinders to gather around our virtual campfire to explore the question: How do we embrace the lunacy of tech with playfulness? But it's quite likely that our discussion will take other meandering turns as well.
So, pack your curiosity, moral imagination, and smiles, and join us around the virtual campfire for our next 🌕 Pathfinders Full Moon Gathering on Wednesday, September 18 at 5PM AEST / 9AM CEST, when the moon will once again be illuminated by the sun.
This is a free and casual open discussion, but please be sure to sign up so that we can lug an appropriate number of logs around the virtual campfire. And yes, friends who don't have the attention span for the Newmoonsletter are also welcome, as long as they reserve their seat on the logs.
Keep on finding paths on your own
If you can't make it to our Full Moon Pathfinding session, we still invite you to make your own! If anything emerges while reading this Newmoonsletter, write it down. You can keep these reflections to yourself or share them with others. If it feels right, find the Reply button (or comment on this post) and share your reflections with us. We'd love to feature Pathfinders' reflections in upcoming Newmoonsletters and explore even more diverse perspectives.
And if you've enjoyed this Newmoonsletter or perhaps even cracked a smile, we'd appreciate it if you shared it with your friends and colleagues.
The next Newmoonsletter will rise again during the next new moon. Until then, build some communication bridges, get to know the trees in your neighborhood, and be mindful about the seeds of intention you plant and the stories you tell. There's magic in both.
With 🧡 from the Tethix campfire,
Alja and Mat