Good morning!
In this week's dispatch:
Finding tasks for and with AI
Critical friends - Helen Beetham and Audrey Watters on air
Finding tasks for and with AI
A theme I've been noticing recently is that of AI as a team member that can be given particular tasks. This theme touches on group work in education, integration into our daily workflows, the creation of so-called agents, and ultimately whether or not the entire economic project of genAI will succeed.
In education, I first came across the idea of conceptualising genAI as a 'group member' in Kate Tregloan and Sarah Song's paper at ASCILITE 2024 "From How Much to Whodunnit: A typology for authorising and evaluating student AI use". They argue this framing allows certain tasks to be allocated to genAI as a team member with a particular function in the group dynamic. It also enables the creation of rubrics that incentivise students to disclose how and why they used genAI.
When it comes to how we integrate genAI into our workflows, Anthropic has recently released a report that sheds light on how its free users are using the Claude chatbot for work. Although it's called the Anthropic Economic Index, it's really all about how people are finding useful tasks for and with AI.

It's called the economic index because this activity underpins the (still open) question of whether genAI will generate enough value for individuals and companies to be worth the enormous costs. Behind every free genAI tool is an eye-watering amount of investment that will want a return sooner or later. In business-speak, tasks with value are called "use cases".
As Ethan Mollick has been arguing for a while now on his Substack One Useful Thing, individuals have been left to discover the use cases for themselves because companies released these AI tools without an instruction manual. This means that, collectively, we struggle to move from personal productivity to organisational productivity, which creates tension within organisations.
The next level of this is the hype around 'agents', a concept whose definition is still in contention, but which is of great interest to organisations. Anthropic's blog post Building effective agents helpfully distinguishes between workflows (a pre-defined sequence of tasks) and agents (goal-oriented systems where the exact number and order of steps are not pre-determined). A host of companies, from juggernauts like Salesforce to countless new startups, are racing to build useful and reliable agents, no doubt because they can see dollar signs behind the idea of mechanised employees who will never need a break or join a union.
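To make that distinction concrete, here's a minimal sketch in Python. It's illustrative only: call_model is a hypothetical stand-in for a real LLM API call and the prompts are invented, but it shows the basic difference between a fixed sequence of model calls (a workflow) and a loop where the model decides the next step (an agent).

```python
# Sketch of the workflow vs agent distinction.
# call_model() is a hypothetical stand-in for any LLM API; it is not a real library function.

def call_model(prompt: str) -> str:
    """Pretend LLM call: in practice this would hit a real model API."""
    return f"[model response to: {prompt[:40]}...]"

# Workflow: a pre-defined sequence of steps, always executed in the same order.
def summarise_and_translate(document: str) -> str:
    summary = call_model(f"Summarise this document: {document}")
    translation = call_model(f"Translate this summary into French: {summary}")
    return translation

# Agent: given a goal, the model decides what to do next,
# looping until it declares the goal complete (or a step limit is hit).
def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        next_action = call_model(
            "You are working towards the goal below. "
            "Reply DONE if it is achieved, otherwise describe the next step.\n"
            + "\n".join(history)
        )
        if "DONE" in next_action:
            break
        history.append(next_action)
    return "\n".join(history)

if __name__ == "__main__":
    print(summarise_and_translate("An example document."))
    print(run_agent("Find three recent papers on AI in group work."))
```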

For organisations struggling to adapt, the appeal is that agents sidestep the messy problem of integrating AI into existing job roles and workflows. Don't you realise how expensive and difficult large-scale transformation projects are? And that the people part is the most complicated? Save yourself the headache! Forget about giving your staff access to AI tools or the time to learn them; just replace them with out-of-the-box 'agent' employees that already know how to use AI because they are AI.
For my part, I see some hopeful signs of a middle path where teams are finding ways to integrate AI into what they do at the workflow or task level. It requires leadership to trust teams and empower them with access to tools, and individuals to spend time learning those tools and understanding their capabilities.
I want to highlight some examples of this from the AI in HE Symposium: two talks from different learning designers demonstrate what it looks like when skilled experts integrate genAI effectively into their workflows - in this case by prompting custom GPTs to perform specific tasks.


The race is on for both individuals and organisations to find effective use cases for genAI, from tasks to workflows to agents. The extent to which this is possible will, I think, determine the long-term future of AI, and whether it lives up to the much-hyped transformation of how we work and learn.
Also at stake is what that future will look like. Will it be the replacement of skilled, salaried humans with subscription-based AI agents? Will it be empowered humans using their talents to make the world a better place, sometimes with AI support? Or will it be neither, and the world economy will instead come crashing down?
My vote is for empowered humans, for the record.
What tasks have you found for genAI in your workflows? Do you get enough value out of AI to pay for any tools? Reply or comment to let me know.
Critical friends - Helen Beetham and Audrey Watters on air
One antidote to hype is time spent hands-on with the object in question, figuring out what tasks it can do; another is listening to smart people who aren't having a bar of it and can provide valuable historical, political and cultural context. I can't think of any two people in the educational technology space who do this better than Helen Beetham and Audrey Watters. They've released two episodes in conversation on Helen Beetham's Imperfect Offerings podcast.
It takes them a little while to hit their stride in the first episode, a wide-ranging overview of the critical positions and issues around generative AI in society, and in education in particular. They both maintain an unflinching critique of the entire project of marketising and automating education.
In the second episode, they start to focus a bit more on specific issues. While it's lovely to hear how much they enjoy talking to each other and are in agreement about the various ills of big tech, I look forward to them setting their sights on particular topics in future episodes. For example, I doubt the tech evangelist blog posts of the big tech CEOs and venture capitalists would hold up for long at all under sustained critique from these two, and I'd love to hear that conversation. (Maybe in the style of the entertaining and enlightening If Books Could Kill podcast, which takes an intellectual sledgehammer to "The airport bestsellers that captured our hearts and ruined our minds.")
If you’re after more like this, Tim Klapdor includes Imperfect Offerings in his roundup of recent tech critique listening recommendations.
Dispatched
That's all, folks. See you next week.
Antony :)
Thanks for reading. Tachyon is written by a human in Perth, Australia.
Subscribe to receive all future posts in your inbox. If you liked this post and found it useful, consider forwarding to a friend who might enjoy it too.
P.S.
What I’m 'reading' (in audiobook format)
I've finally joined the Patrick Rothfuss party and started listening to The Name of the Wind in audiobook format on Spotify. It's as good as they say. I probably should have listened sooner, but the series has stalled at the second book, so what was the rush?
What I'm listening to

As much as I appreciate Spotify's ability to surface new music I might like through its recommendation system, it can also trap me in a bubble. There is also an increasing risk of being served generic 'ghost artist' music or even AI slop. One way I try to resist the algorithm and support human curation is to follow NPR's All Songs Considered podcast. I've discovered many artists I love through their recommendations, and listened to tracks I normally wouldn't have. Hearing the critics discuss the latter in particular can give me an appreciation for (or at least a better understanding of) music that isn't really to my taste. They also post their human-curated lists to Spotify, and I've been enjoying working my way through their 124 Best Songs of 2024 playlist.