For many members of my generation – the so-called Generation Z – 2020 was the year when myths surrounding education systems were blown apart.
Against the backdrop of pandemic-related restrictions, fears exploded over the risk of credentials being ‘devalued’ by the adoption of alternative teaching and assessment methods and the cancellation or replacement of exams. Policies that exacerbated structural injustices in education systems were accompanied by frequent U-turns. Three years on, members of my cohort are graduating from university amidst further economic and political uncertainty.
One among many sources of this is the explosion in the use and capabilities of generative AI. Tools such as ChatGPT and Bard arrived slightly too late to transform my university experience. At my university, we initially received strict warnings against their use for coursework, even though established anti-plagiarism tools remain largely unable to detect whether generative AI has been used. The UK’s Russell Group consortium of research universities has since issued guidelines for integrating these tools into education. This does not necessarily mean moving in lockstep with governmental regulators.
Generative AI is already prompting changes to teaching and examination methods across educational systems. This could mean incentivising a reversion to ‘traditional’ modes of assessment, such as closed-book in-person exams, but equally encouraging reforms to ensure that assessments test critical thinking and argumentation rather than memory skills, for example. This is particularly relevant to primary and secondary education, where large syllabi and time constraints all too often incentivise rote learning in preparation for summative exams. Take my own subjects, history and politics: there is clear potential to apply AI tools to make learning more interactive and focused on assessing competing arguments and case studies, for instance through the creation of specialised chatbots. As UCL’s Rose Luckin argues, the next months and years represent a crucial window to channel AI towards these public good objectives that enrich education by complementing human intelligence.
As members of Generation Z enter the workforce, the impact of new technologies on job prospects over the next several decades will only grow in political salience.
As the first generation that grew up with the internet, we can now expect to face job markets reshaped by the application of AI. So far, however, concern appears subdued among my contemporaries. While AI and computer science are increasingly popular fields of study in public universities, and many young people are clearly alert to the career opportunities they promise to unlock, fears over which jobs will prove most vulnerable to AI are also slowly rising. Which jobs are most vulnerable? How quickly will large-scale change come? It is still comfortable to believe that these changes will affect not us directly, but rather unknown others, in different jobs or distant countries.
Fears over the impact of automation are hardly new: they have accompanied industrialisation since the early 19th century.
More recently, they have notably fed into political shifts in post-industrial regions. Consider support for Donald Trump in the American Midwest ‘Rust Belt’, or for Marine Le Pen among blue-collar voters in northern France. However, the improvement of generative AI tools stands to impact what were previously presumed to be secure, skilled occupations in the services sector – from graphic design to copywriting and journalism. It is important to recognise that insiders in these sectors are themselves divided over whether this is a good thing. Many recognise the potential for AI to serve as a useful aid, enabling more creative work and cutting out the more mechanical elements of their jobs. Equally, it threatens to undermine established business models, and redundancies are already visible.
This debate feeds into the related question of AI’s impact on politics, and particularly on elections in democracies. The ability of generative AI tools, developing at pace since 2017, to promote the spread of disinformation and shape citizens’ perceptions is already clear, and can only be expected to grow in the near future. Identity theft and impersonation enabled by deepfakes constitute one avenue for this. In the UK, experts are warning that a recent cyber hack of electoral data could enable targeted voter manipulation, powered by AI. Such security concerns are partly driving current efforts to draw up comprehensive AI governance legislation, in anticipation of impending technological improvements. For more on AI’s relationship to misinformation and disinformation, see our conversation with author and researcher Hossein Derakhshan.
Where does all this leave young people today?
A recent flurry of publications has sought to draw together empirical findings to chart the impact that my digital-native generation will have on politics and society in the coming decades – in other words, to assess how different our priorities will be from those of our parents and grandparents. A recent McKinsey study concluded that Generation Z prioritises securing “stable, secure formal employment”, positing that young people today increasingly want to change institutions and society at large organically, from within. While these findings ring substantially true to my own experiences and interactions with my contemporaries, they should not be taken to imply that Generation Z is disillusioned or demobilised. The millions actively engaged in climate activism and a host of other social and political causes should put that suggestion to rest. More convincingly, they might reflect young people’s impatience and dissatisfaction with politics and political avenues for achieving change, as well as their economic insecurity.
Psychology professor Jean M. Twenge, meanwhile, identifies the purportedly distinctive behavioural characteristics of American teens and young adults today, whom she dubs the iGeneration (where ‘i’ stands for individualism, inequality, insecurity, and internet), and the ramifications they will have over the coming decades. In particular, Twenge argues that growing up with the internet, and smartphones in particular, has isolated young people and led them to adopt the trappings of adulthood – such as leaving their parents’ homes, getting married, or learning to drive – later than their predecessors.
The presumption that ‘my generation’ fundamentally ‘gets it’ is not new.
Nor, conversely, are the fears of older generations – in whose hands political and economic power is concentrated, and who thus shape educational systems and labour markets – that the young are ignorant, misinformed, immature, or dangerous. Twenge’s argument is compelling on the face of it, but also risks minimising the differences between Generation Z members, from subculture participation to class and nationality. The politicisation of generational divides today is arguably less significant than, for example, during the late 1960s, when large-scale student-led protest movements erupted across much of the globe. Generational analyses are necessarily imprecise and can verge on the formulaic, but this does not mean they lack value.
Is my generation aware of the risks associated with the digital technologies we have grown up with? Certainly many of us are, and think we are making mindful or critical use of these tools, or at least accept the associated risks in full knowledge that our personal data is being mined and commodified. Rather than renouncing digital tools, we continue to view them as a net positive, and as the basis for enhancing instead of diminishing our agency. That perspective is intergenerational.
So too is the need to invest in digital and information literacy and critical thinking through lifelong learning policies. Governments getting ahead of the curve, while remaining transparent in tech regulation, would also be a step in the right direction. As my generation enters labour markets, but also as we become political actors – whether as voters, activists, or public servants – the potential for technologies including AI to cause intentional harm and disruption will only escalate. Generational divides mean that exposure and responses to these threats may vary depending on age (as just one of multiple factors), but the challenges themselves remain the same.