Ali Alkhatib was frustrated that the reality of technology he was experiencing and observing was not being articulated adequately. The researcher, who works at the intersection of AI and society, has been closely studying digital culture for a decade.
The former director of the Center for Applied Data Ethics at the University of San Francisco has been pointing out the problems in technology – from how gig work disempowers workers to the power that AI systems wield even as they become increasingly unhinged from reality.
“We allow these systems to make decisions without anyone thinking about what the consequences are – systems whose dysfunctions are totally unaccountable to people,” said Alkhatib in an exclusive interview with AIM. He believes that in an age where AI is present in our daily lives, the central task is to drag back into our discourse the recognition that these systems are fundamentally an organisation of society.
“If a [technological] process will yield airstrikes on people, cause life-altering harm, or keep them from opportunities, those are decisions we absolutely can’t tolerate. If a company wants to get into the business of selling those decisions, then as a society, we need to stop that from happening,” said the researcher.
Naturally, this line of thinking drifts into an admittedly less technological and more anti-authoritarian view of AI and other technologies. He clarified that he likes algorithmic systems organising things in his life. “I’m not a person who fundamentally rejects everything technological by any means,” he said.
But he urges that “if the technology is going to cause harm, we need to be able to just say no, regardless of whatever set of knowledge or data it operates on”.
Events of the Past
Alkhatib’s research grows out of a question he has long pondered: How do people respond to the feeling of systemic oppression when they start to realise it?
“We have encountered these problems in the past, and we can deal with them again,” the cultural anthropologist explained. Through his work, Alkhatib wants to give people some sense of the systemic power dynamics at play and help them understand that these are not isolated, independent problems.
He recalled that a decade ago, gig workers who were frustrated with these systems realised they weren’t doing anything wrong. “The system was functionally designed to ensure that they can’t possibly profit, because Uber needs to profit and take every penny. Seeing this, gig workers started doing bizarre and interesting things with their apps to trick the system into doing what they want,” he recollected.
“Similarly, today YouTubers and TikTokers use slang to keep the content monetisation system from catching them. They use clickbait thumbnails that appeal to the algorithm rather than to people, because there is a recognition that if the algorithm likes their content, they can pass through the system relatively unscathed,” he explained.
Elaborating on the flaws in the reward system, he said, “Everybody wants a 15-second bit meticulously designed for the algorithm, and I can’t blame people, because they’re responding to a social system that rewards those bits to the exclusion of the quality of the whole song. Such moments of absurdity are not the result of a person being stupid but of the system being stupid.”
Communicate, Not Generate
The latest human-mimicking technology has already seeped into our lives and continues to push deeper. “Generative AI is going to become a tool that managers use to push out human-like work of worse quality, more quickly, for less money,” Alkhatib predicted.
“They will always be able to threaten you with the prospect of using generative AI to do your job, however wrong and untrue the results,” he added. As people become more sensitive to the inauthenticity of algorithmically generated content, he hopes they will notice the gaping hole at the heart of it: AI can produce words, but it doesn’t say anything.
Last week, Sam Altman, chief of ChatGPT creator OpenAI, said that 100 billion words are being generated each day. “Generating words is nothing like people communicating,” Alkhatib pointed out.
He went on to paraphrase linguist Emily Bender: if somebody couldn’t even be bothered to say it or write it, why should I be bothered to read it? “That wasn’t my thought, but I liked the sentiment,” he added.
Alkhatib hopes there will be a popular push against generative AI, or at least towards people claiming a sort of authorial ownership of the things they put out into the world. In his view, we need to stop valuing garbage produced en masse.
Pay More Attention
According to the researcher, two things are important. The first is to recognise that a technology you have an okay or even good experience with might be harmful to other people. The second is to recognise that decisions once made by people close to you are now being made by technological platforms.
For instance, he pointed out that Google doesn’t just know your schedule and guide you through maps; it is present in your social, cultural, and political life.
“Even in your culinary life, since you search for restaurants, recipes, etc. Google benefits from that because they accumulate the power to decide what places to show you and where to send you,” Alkhatib noted.
He believes that the ability to pull away from a system when it starts to do stupid stuff is the fundamental bedrock of having a productive relationship with it. “We’ve already gone so far beyond the line of what’s reasonable or acceptable that we can’t even see it anymore,” he said.