We recently met up with Hossein Derakhshan, an Iranian-Canadian author, researcher, and public speaker, and a pioneer of blogging, podcasting, and tech journalism in Iran. In this conversation, Hossein Derakhshan and Jacob Grech discuss personalised media consumption, ‘malinformation’, and the nature of platforms.
Jacob Grech: Hossein, thank you for speaking with me. I want to start by asking what projects you’re currently working on.
Hossein Derakhshan: I am currently working on a couple of things. One is my PhD project, which is about the implications of personalised media consumption for perceptions of the collective. Specifically, I am focusing on audible forms of media: music, podcasts, audiobooks. By examining people’s listening habits, I aim to observe when listening becomes a personalised experience and detaches people from their surroundings. Since these media are increasingly consumed through our earphones and headphones, there is also a physical detachment from the public. I am looking at the implications of this personalised experience for ideas of society or the collective. For instance, what happens to a society in which many people listen to music that no one else is listening to, when just decades ago listening to music was largely a shared experience?
Much of the idea of society was built on shared experiences of media consumption. Watching popular television shows, sports matches, or presidential election coverage: these are experiences that connect many of us. This argument is not new. It goes back to notions of the emergence of nation-states through media consumption, as in Benedict Anderson’s work on imagined communities. Shared experiences over decades and centuries have established our contemporary communities, and without them I wonder what may happen to the idea of community and collectivity, both imagined and real.
That’s my PhD project. On the side, I’m following a few other avenues. One of these concerns information disorder, which I have already published on. Malinformation is my own coinage, and I am currently trying to emphasise why it is important. It refers to genuine information whose context is changed, usually to inflict harm on or manipulate someone. This definition applies to experiences many people are having now. For instance, I propose to define deepfakes as a combination of disinformation and malinformation.
The best example of malinformation would be changing the context of a photograph to give the impression that it was taken at another time or place. We’ve seen many instances of this in the Syrian Civil War and the Ukraine War. Another case is revenge porn, where the change of context from private to public signals the malicious intent.
I recently wrote an article for WIRED Magazine about how CCTV cameras are being used to manipulate people. Changing only the time label on CCTV footage can be highly manipulative. The case I describe is the death of a young Iranian girl who had participated in the ongoing protests. The state ruled it a suicide, producing CCTV footage as evidence. It is not clear to me whether the footage was completely staged, which would put it in the disinformation category, or whether it is genuine and only the time label was manipulated.
This is a highly significant issue. CCTV footage is a medium widely deemed trustworthy. It is the backbone evidence in many court cases. I don’t think people should have so much trust in CCTV, given how easy it is to manipulate or even fabricate it. I am sure that there will soon be generative AI tools capable of creating highly convincing footage from scratch.
Because all sources of truth have proved manipulable, I suggest that rather than encouraging people to rely on a few sources of truth in respectable legacy media, we should promote universal scepticism in dealing with information. We have seen examples of how even highly respected organisations can fall for state-run manipulation operations. We are now at the 20th anniversary of the Iraq War. Credible legacy media outlets, including The New York Times, were duped by American intelligence agencies’ claims regarding Iraqi weapons of mass destruction.
Given these examples, encouraging blanket scepticism seems much less risky than an uneven distribution of trust. This is particularly true now because it is so easy to fabricate, for instance, a BBC News report. By the time the BBC itself has managed to convince people that such footage was completely fabricated, it may already have had its intended effect of causing harm, perhaps on a significant scale.
JG: As an EdTech foundation, 3CL’s primary focus is media literacy, and this extends to developing policy proposals. What you suggest is that our starting point should be to emphasise the risk that online content is designed to harm you.
HD: Exactly, and this is only going to become a more pressing concern. Another strand of my work is more methodological and concerns the nature of platforms today. There are three key principles to take note of. First, the algorithms are invisible, offering little to no transparency to the average user. Second, the stream is novel as a personalised, constantly changing medium; it is hyper-modulatory. Whereas TV series, newspapers, or articles offer delineated, unchanging content, this is not the case for the stream. This has methodological implications for academic research. When you interview people about their news feeds on Facebook or Twitter, they can only answer based on that moment, or on what they have experienced recently. You would not necessarily get the same answers if you had conducted the interview two hours earlier or later. This does not represent the continuum of their experiences, and it marks, I think, the first time that media and communications research has had to study such an unfixed, constantly changing object. Finally, the stream is inextricable from the other work that platforms are engaged in.
JG: The stream is also portrayed as progress, despite its implications for assigning accountability for published information. The gatekeepers of this information are largely the computer engineers responsible for designing stream-based platforms. They are the ones negotiating the ethical dilemmas of publishers.
HD: Yes. I argue that we can benefit from approaching these platforms with the tools of ethnomethodology. Harold Garfinkel’s work remains key here. The method of the ‘breaching experiment’ helps reveal how people experience these platforms. Instead of asking interviewees how they feel about, for instance, the gendered aspects of their Twitter feed, a researcher could ask them to swap phones with someone else and absorb that person’s feed. This would enable them to transcend their everyday experience. By disrupting the familiar, the breaching experiment gives researchers a way to identify the underlying conventions that we all agree on without formally acknowledging them.
JG: I would like to connect this three-pronged model for analysing the stream to your work on mass personalisation, and particularly podcasts. You distinguish between an older, freer, and more decentralised internet, and the mass-personalised model that people of my generation have grown up with. For most of us, the blog is peripheral to our understanding of the internet. Podcasts, on the other hand, seem to have an intergenerational appeal as a means to transcend your intellectual comfort zone. Possibly because it is a longer form of content, there seems to be a presumption that podcasts offer a space for critical, nuanced discussion. Even if you are listening to a creator whose values align with your own, there is an expectation that they will introduce you to dissenting voices.
HD: Another area I have been working on is the future of journalism. One prominent argument connects the nineteenth-century rise of conventional journalism, revolving around news, with the growth of the middle class and parallel processes of globalisation and technological change. This goes back to James Carey’s thesis. I argue that what we are experiencing now is a crisis of journalism that is a direct result of the decline of news as a relevant cultural form.
The news used to function as a means of giving people a sense of globalised experience. This is no longer the monopoly of journalists. It also offered a source of drama, and we now have much better and much cheaper sources of that, too. What we need to do to save journalism, then, is to provide it with an alternative focus to news. My answer lies in what I term factual affective narratives, of which podcasts are a good example. There is a proven market for longer-form content that is written in a different language from news coverage but remains journalism. People are willing to pay for podcasts, and to watch documentary theatre or films based on historical events.
Why affective narratives? The word ‘affect’ introduces vulnerabilities in terms of factuality. Sometimes the authors of these narratives exaggerate or neglect details for dramatic effect. Of course, this is a feature of conventional journalism, too. Podcasts, as an emerging form of journalism, are certainly vulnerable from the standpoint of factual content, but on a practical level there are steps that platforms could take. Every platform that carries podcasts could enable users to add public feedback and point out factual inaccuracies, with such feedback algorithmically prioritised over general comments. A good example of how this might work is Wikipedia, which has enforced a high level of factuality by compelling contributors to source their claims, at least for controversial or contested articles.
JG: Looking at the next generation of online platforms, what proposals do you have for regulators seeking to deal with proliferating untruths?
HD: Platform neutrality would be key. What this means is the separation of the core code of the platform from the algorithms and user data. User data is already separated from the other two layers, as is required by legislation such as the EU’s General Data Protection Regulation. My proposal is to detach platforms’ core code from their algorithms. This would encourage the growth of a whole market of third-party algorithms to use on platforms. This would generate competition, encouraging transparency and accountability for platform creators. It would also create many new commercial opportunities for algorithm developers. Think of an algorithm for navigation apps that would optimise low-emissions travel, or direct users to independent businesses and off-the-beaten-track destinations on their journeys.
JG: What you are proposing is a more attractive and workable alternative to the increasingly prominent calls to break up platforms on the basis that they are monopolistic.
HD: Exactly. No one can deny that platform neutrality is a liberal proposal. It avoids criticisms of state overreach, and instead means using capitalist logic against itself to challenge the monopolies it has created in the digital space.