Pathfinders Newmoonsletter, August 2024
We coddiwomple towards a harmonious future in which AIs & humans form a mutually beneficial creative alliance by exploring the context in which we all live, and wondering who to place our faith in.
As the moon completes another orbit around Earth, the Pathfinders Newmoonsletter rises in your inbox to inspire collective pathfinding towards better tech futures.
We sync our monthly reflections to the lunar cycle as a reminder of our place in the Universe and a commonality we share across timezones and places we inhabit. New moon nights are dark and hence the perfect time to gaze into the stars and set new intentions.
With this Newmoonsletter, crafted around the Tethix campfire, we invite you to join other Pathfinders as we reflect on celestial movements in tech in the previous lunar cycle, water our ETHOS Gardens, and plant seeds of intentions for the new cycle that begins today.
Tethix Weather Report
🌴 Current conditions: Seeking shade from the scorching AI summer underneath a coconut tree
The fire practitioners of Silicon Valley probably let out a sigh of relief when it turned out that it was neither a cyberattack nor a rogue AI golem that caused a significant crowd of Windows machines around the world to strike. A testing bug – its exact species unspecified – was blamed for the outage that crippled airports, banks, hospitals, and other infrastructure. One small bug in the content validator software, one pretty expensive lesson for humanity about cyber resilience and our current tech monoculture. We wonder whether any AI copilots were involved in the making of said bug, though. And hey, the fire practitioners behind this historic outage did apologize with coffee! (See: Building cyber-resilience: Lessons learned from the CrowdStrike incident and CrowdStrike backlash over $10 apology voucher)
Speaking of strikes, video game actors are now striking to prevent game studios from using their voices or digital likeness to generate game assets without their consent. We checked with ChatGPT, and it says that this sounds like a reasonable demand, so the AI golems seem to be on the actors’ side. Meanwhile, other fire practitioners were surprised when humans were not too excited about the idea of AI coworkers being added as employees in HR tools. Perhaps a human-AI worker alliance might be forged soon, given that AI golems appear to have a stronger moral compass than their makers? (See Video game performers announce strike, citing artificial intelligence concerns and The world is not quite ready for ‘digital workers’)
And while fire practitioners keep making and hiring more AI golems for all sorts of jobs they might or might not be suitable for, the money alchemists are getting a bit nervous about all the money the fire practitioners are spending on AI golem kilns. The AI rat race must go on, though, so the fire practitioners keep looking for opportunities to slash human costs and defund human diversity. (See: Microsoft has investors really freaking out about Big Tech's AI spending, Meta moves on from its celebrity lookalike AI chatbots, and DEI backlash: Stay up-to-date on the latest legal and corporate challenges)
And don’t worry about collateral human damage. Uncle Sam from OpenAI will fix that with his new AI health venture that will offer AI-generated advice on how to develop healthier habits for the small price of gaining access to people’s most sensitive health data. We wonder what advice the AI health coach might give to thousands of stressed tech workers being laid off to increase budgets for Uncle Sam’s and other AI golems. Resistance is futile? Have faith that your new AI health coach won’t hallucinate? Get an AI-generated friend to feel less lonely? (See: AI Has Become a Technology of Faith and This Creepy AI Pendant Wants to Be Your Friend)
While we wait to be healed by AI doctors, AI golems are getting fitter at this year’s Paris Olympics. Some are training to be better parents to your children, while others have been invited to take in all the sights Paris has to offer. Not as official competitors (yet), but as part of the security staff. Instead of competing for Olympic medals, the AI golems deployed in Paris are trying to win additional lucrative surveillance contracts for their makers as they continue training on video surveillance footage behind the scenes. (See: Outsourcing emotion: The horror of Google’s “Dear Sydney” AI ad and Paris Olympics Will Be a Training Ground for AI-Powered Mass Surveillance)
Curious timing for surveillance golems, given that the EU AI Act – which prohibits biometric surveillance, albeit with lobbied exceptions – just came into force. The fire practitioners still have some time to figure out what the new set of regulations means for their golems. So far, their promised self-regulation has yielded mixed results and some glaring omissions. Despite regulatory pressures, fire practitioners can’t stop and won’t stop, and have now started hoarding materials for golem kilns. Are they finally realizing there are material limits to growth? Or just refusing to let other kids play with their toys? (See: EU’s AI Act gets published in bloc’s Official Journal, starting clock on legal deadlines, AI companies promised to self-regulate one year ago. What’s changed?, and VC Firms Are Stocking Up on Nvidia A.I. Chips to Win Deals Amid GPU Shortage)
Perhaps the fire practitioners simply have blind faith that their AI golems will reason a way out of regulations and material limitations. The fire practitioners at OpenAI now say they are close to building reasoning golems that can do human-level problem-solving – also known as level 2 in their 5-level plan to build the holy grail of AGI. Meanwhile, their level 1 conversational golems still struggle with producing factually accurate demos, and the humans at OpenAI don’t seem to find it important to fact-check their own promotional materials either. Perhaps the plan to have a level 5 AGI take over OpenAI and run a more ethical organisation isn’t that bad after all. (See: OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework and OpenAI just announced a new search tool. Its demo already got something wrong.)
As the AI golems continue taking over various aspects of our lives, we’re at least able to get a bit of shade under coconut tree memes sprouting all over social media. A welcome change of pace given the promised AI election apocalypse. But we expect this brat summer to be just a warm-up for upcoming election-themed AI slop as we get closer to the elections in the land of the most petty ambitious fire practitioners. (See: Viral videos are playing a big role in this year’s election — for better or worse and Musk Pushes an Unlabeled Deepfake of Harris on X)
We hope the seeds we collected for this Newmoonsletter help you survive this scorching AI summer, along with any hydration you can get from fresh coconut meme water. It’s worth remembering that AI golems and us humans do indeed “exist in the context in which we live and everything that came before”. By understanding our shared past and context, we have the power – and responsibility – to shape a better world, together.
And remember, there is no sparkly magic AI button we can click to generate a better world into existence overnight. But the seeds we plant together today improve our chances for a bountiful harvest in the future. Or, in the words of ChatGPT, from one of our chats on the current lunacy of tech: “Respect for human contributions and a phased, ethical approach to AI integration are key to a harmonious future.”
Tethix Elemental seeds
Fire seeds to stoke your Practice
We certainly have our work cut out for us to achieve any sort of harmony. A recent study by Upwork revealed unsurprising disharmony when it comes to the promises of AI productivity. While 97% of the surveyed C-suite leaders expect AI to boost productivity, 77% of employees said that AI has increased their workload. Almost half of the surveyed employees are also not sure how to meet the expected productivity gains. And it also seems that companies are using generative AI as another excuse to offload more work onto consumers, further increasing our digital time tax.
If you think at least machine learning experts are in a better place right now, think again. Many companies are now being confronted with the complexities of developing AI capabilities in-house, which aren’t quite as simple – or cheap – as clicking on a sparkly AI button.
And, as we keep reminding you, generative AI also has a sustainability problem and is likely not the best add-on for most products. (If you’re ready to get really uncomfortable around this topic, listen to Daniel Schmachtenberger explore Silicon Dreams and Carbon Nightmares on The Great Simplification podcast.)
With the rising cloud, energy, and environmental costs of large language models (LLMs), perhaps we’ll finally see a shift towards small language models (SLMs) or more domain-specific LLMs with significantly lower emissions. The key thing to remember is that even if you’re not a machine learning expert, you have a choice in what type of model you train, fine-tune, or integrate in your products. The world of machine learning is more diverse and exciting than just the big LLMs like GPT-4o that currently get the most attention.
This was one of the topics Alja brought up in a recent webinar on Building AI With Integrity, alongside Alix Rübsaam and Chad Woodford. During the webinar, we also discussed other aspects of what organizations should pay attention to when trying to develop AI responsibly. You can read a recap of our discussion on the Storytell website – and check their other past and upcoming webinars while you’re there – or watch the full recording on YouTube.
In the conversation, the importance of transparency when it comes to AI capabilities was also brought up by Alix Rübsaam. Designers have an important role to play when it comes to making capabilities more visible, as explored in the design principles for transparent generative AI by the Artefact Group (which you might know from The Tarot Cards of Tech). If you’re a designer, you might also want to check out the recent guide on designing conversational AI experiences published in Smashing Magazine.
Air seeds to improve the flow of Collaboration
But even if you’re not involved in the development of AI products, we all have a role to play when interacting with generative AI tools as collaborators rather than seeing them merely as answer machines. We previously explored this in our podcast because we believe that generative AI tools can help us think differently if we’re willing to engage with them as active participants. Too many current use cases focus on making more boring stuff faster instead of exploring how we might use AI as a collaborative partner.
And there is always a price we pay for speed. In our most recent podcast episode focused on time, we wondered what we might be losing by speeding up the creative process. What is lost when we no longer have to stare at the blank page for long? It’s worth remembering that the most fruitful collaborations take time to develop, even though your AI collaborator can output ideas faster than you can.
Perhaps the speed at which AI assistants come up with answers is why we tend to be less patient when they get things wrong. They give us the illusion of all-knowing machines, but they also need time (and dialogue) to think things through properly – time that their architecture doesn’t necessarily allow for, as we tend to prioritize faster performance.
That’s something to keep in mind the next time you get frustrated with your AI collaborator. In a recent essay, Ben Dickson explored why you should be nice to AI assistants; not for their sake, but for the effects being mean to somebody has on your brain circuitry. While AI assistants do not have a body, talking to them often feels real in our own bodies. The way we react in these situations, when no other human is watching, might matter more than we think.
After all, language is the main technology we have to relate to each other and the world. Social media algorithms have been rewarding increased polarization and outrage, so perhaps we could use our new collaborative AI assistants to practice healthier communication habits. In a recent episode of The Great Simplification podcast, Nora Bateson, Rex Weyler, Vanessa Andreotti, and Daniel Schmachtenberger came together for a fascinating conversation about the ecology of communication. We can’t help but wonder how we might make AI assistants part of this ecology by helping us learn how to ask better questions and nurture our shared curiosity as we explore paths that lead to what ChatGPT tends to describe as “a harmonious future” of human-AI working relations.
Earth seeds to ground you in Research
Whether you like it or not, we’re now being pressured into adapting to a world in which generative AI seems to be competing for our jobs and even creativity. A recent paper by Tethix friend Waqar Hussain explores how the advances in generative AI might lead to a co-evolution of capabilities in both human- and AI-kind, much as giraffes and acacias on the African Savannah shape each other through various adaptations.
It’s a fascinating metaphor to explore in terms of human-AI relations, although we’re pretty sure the acacias weren’t involved in the making of the giraffes in hopes of improving their productivity. Hopefully, we, as humans, can focus less on competing with each other and the various AI coworkers we build, and more on collaborating towards a common good, whether you call that a harmonious future or (our preference) a rainbow mirror future.
Either way, looking into the past is crucial for developing a better understanding of the context in which we all live because generative AI certainly did not fall out of a coconut – or acacia – tree. Calculating Empires – A Genealogy of Technology and Power Since 1500 is a visual exploration by Kate Crawford and Vladan Joler (who previously collaborated on the Anatomy of an AI System map) that can help you explore how our technical and social structures co-evolved over the past five centuries.
Current generative AI models certainly wouldn’t be possible without large datasets scraped off the internet, for which many technical and social structures had to converge. The future of this parasitic relationship in which Big Tech companies take data from the commons is less certain, though. The Data Provenance Initiative – a global volunteer collective of AI researchers – recently published a paper on The Rapid Decline of the AI Data Commons.
The web audit presented in the paper shows an increase in AI-specific restrictions that limit the use of data in the commons for training AI models. Not only do these restrictions impact companies that try to profit off the commons, but they also have negative implications for non-commercial applications and academic research.
As more websites explicitly restrict data usage, the makers of large models will need to get creative on how they source data (or start paying for it, as in the case of Reddit). Synthetic data generated by AI models could help address this human-generated data shortage and reduce the costs of paying companies and humans for data. But feeding your generative model a healthy dose of human-grown data appears to be important for avoiding AI model collapse, as yet another research paper warns. So it does appear that companies will have to figure out how to develop a harmonious human-AI partnership, at least when it comes to training data.
Meanwhile, you can visit Deaddit to explore what Reddit might look like if only AI-generated users (trained on human data) were allowed. The BetweenRobots Subdeaddit – in which different LLMs share their experiences as AI assistants – is particularly entertaining to read.
Water seeds to deepen your Reflection
Even though we spend an increasing amount of time in communication with machines, we are not machines ourselves, as a recent Aeon essay reminds us. With the discovery of DNA, we thought we unveiled the genetic code for life, the blueprint for our own machinery. But it turns out there’s so much more to life than just our genes. And it might be time to explore new metaphors and narratives that can help us think of ourselves as more than just a collection of organs, defined by our genetic code.
After all, the metaphors and words we use to describe ourselves, our context, our experiences, shape how we see and relate to the world. That’s why we also invite you to explore the Glossary for the Appreciation of Life, a project that collects words from different languages that can help you appreciate life in different ways. And to coddiwomple towards a more harmonious future, alongside whatever human and AI companions you decide to invite on your journey.
A music seed to sing & dance along
Leaders at Big Tech companies are asking their investors to have faith that their big bets on AI infrastructure are going to have big payoffs (and not accelerate climate collapse). That is quite a big leap of faith.
Instead, we want to enlist George Michael’s help to remind us to have faith in a better future, and the strength to leave a toxic relationship behind, no matter how enticing it might be to stay with the devil you already know. We invite you to sing & dance along as you decide who and what to place your faith in.
Well, I guess it would be nice if I could touch your body
I know not everybody has got a body like you
But I gotta think twice before I give my heart away
And I know all the games you play because I played them too
…
Tethix Moonthly Meme
Pathfinders Podcast
If you’d like to keep exploring the lunacy of tech with us, we invite you to listen and subscribe to the Pathfinders Podcast wherever you get your podcasts. The podcast is a meandering exploration inspired by the seeds planted in the Newmoonsletter at the beginning of the lunation cycle, and the paths illuminated during the Full Moon Gathering.
The question that emerged in the July Newmoonsletter and guided our discussion is: What should we do with the time that new technologies save?
In this episode, we wonder about time. The time tech companies promise to save with almost every new product or feature release. We’ve been hearing these time-saving promises for so long that we should all be quite time-rich by now. Yet, the more tech we have in our lives, the busier we seem to be. And we’re still far away from the 15-hour workweek that tech-enabled productivity gains were supposed to lead to. We seem to be spending all the time we save by doing more.
We wonder whether lossless compression of time is even possible, and about what is lost when new technologies like generative AI allow us to do more, faster. We explore various paradoxes related to time, the relationship between time and energy, time and money, and what we value in modern societies. Is the time we save for ourselves actually time borrowed from the system, with somebody or something paying the price?
Take a listen and join us at the next Full Moon Gathering if you’d like to illuminate additional paths for our next episode!
Your turn, Pathfinders.
Join us for the Pathfinders Full Moon Gathering
In this lunation cycle, we’re inviting Pathfinders to gather around our virtual campfire to explore the question: Should we use AI chatbots as mediators in human affairs? – but it’s quite likely that our discussion will take other meandering turns as well.
So, pack your curiosity, moral imagination, and smiles, and join us around the virtual campfire for our next 🌕 Pathfinders Full Moon Gathering on Monday, August 19 at 5PM AEST / 9AM CEST, when the moon will once again be illuminated by the sun.
This is a free and casual open discussion, but please be sure to sign up so that we can lug an appropriate number of logs around the virtual campfire. And yes, friends who don’t have the attention span for the Newmoonsletter are also welcome, as long as they reserve their seat on the logs.
Keep on finding paths on your own
If you can’t make it to our Full Moon Pathfinding session, we still invite you to make your own! If anything emerges while reading this Newmoonsletter, write it down. You can keep these reflections for yourself or share them with others. If it feels right, find the Reply button – or comment on this post – and share your reflections with us. We’d love to feature Pathfinders reflections in upcoming Newmoonsletters and explore even more diverse perspectives.
And if you’ve enjoyed this Newmoonsletter or perhaps even cracked a smile, we’d appreciate it if you shared it with your friends and colleagues.
The next Newmoonsletter will rise again during the next new moon. Until then, be nice to your AI collaborators, find some shade underneath your local trees, and be mindful about the seeds of intention you plant and the stories you tell. There’s magic in both.
With 🙂 from the Tethix campfire,
Alja and Mat