AI and Questions
My site here is hosted on the Ghost Platform, and their weekly newsletter was about some AI options creators can use. I knew about some of them, have dabbled with a few, and one image in the newsletter reminded me a lot of one I saw when social media was just emerging. I'm being a little rude here in assuming everyone knows what AI is - just in case, here is some excellent background reading.
These products are here with us now. We all knew AI was likely to grow in strength some years ago (the weak signal), and now it's becoming real (a 'trend' of sorts). That's how futures emerge. Some of these products will survive into those futures, some won't. Eventually AI will be mainstream in some way - it already is for many people.
What decides survival and the transition to mainstream? Not the trend itself, but rather the intersecting global developments that have provided the capacity for AI technology to develop, accompanied by real concerns about privacy and surveillance, and an underlying unease (at least for me) about giving a machine my voice so I can turn these posts I write into audio at a cost of $0.006 a second. My voice is my own now; if I give it to a machine in a company with a vague privacy policy, where might it end up? And do I care? I'm thinking about surveillance capitalism and the surveillance state here - using this AI service might (can) make my voice another form of data to be used by organisations and governments to predict how we will act and buy.
I am a bit of a geek. I like new gadgets and apps and like to try them out, but for the first time, I'm wary. Is this a future I want? Or do I want to avoid it? I know that AI is being used for many positive purposes (healthcare for example), but even when the human enters the equation, it all too quickly becomes business oriented, turning the human into an input or suggesting we need to alter our brain cells to keep up with AI. There's a 'never' in this second link, a word we shouldn't use in futures thinking.
Let's think a bit more expansively on a general level.
What are AI's potential impacts over the next 10-20 years? Where is the human in AI? What global changes are influencing the trajectory of AI, and what will their collective impact be, today and in the coming years? Who is developing AI, and can I trust them?
What action can I/our organisation take today to promote the advantages and mitigate the negatives of this emerging future while ensuring the human remains at the centre of our futures? Or is it too late to do anything at all?
These are the sorts of questions we ask first to understand the present in new ways. Thinking more expansively by asking these sorts of questions takes you beyond unquestioning acceptance of AI as a certainty and opens your mind to thinking about its impact on you, your family, your society and the world. And yes - you can think that broadly.
My two issues are these: who is developing AI, and what are their assumptions and biases? And where is the human, rather than the technology, overtly driving AI - what will the social impact on us be? Here are some cases of AI bias that have already been identified.
Where does your thinking on AI take you? Your first reaction might be that you can't influence AI, but you can, by considering your choices carefully. Do you want your voice all over the internet, not knowing where it will end up or what purposes it will be used for?
Reflect on the questions in this post. AI is real, it is affecting us, and it doesn't work perfectly. While our ability to protect our privacy may already be gone or going, we can still care about where AI ends up - especially those of us who tend to take AI for granted, as we did with social media. Social media was accepted, then challenged when it clashed with human values, and only then did more human-centred alternatives begin to emerge. Do we want to wait that long with AI? Or is it too late?