Taxi app Uber has announced a new
‘quiet mode’ for customers using its premium Uber Black service. By
selecting the option via the app, users can order a cab where the driver is
instructed not to talk. While this change has proven popular with many users,
some taxi drivers have responded negatively to the new quiet mode, with some critics
arguing that it treats taxi drivers more like robots than human beings.
While these critics may certainly have a point, they miss the essential fact that all taxi drivers – and indeed, all human beings – behave, and are encouraged to behave, in a robotic fashion. This blurring of the human and the machine isn’t really anything new; rather, it has been going on for a very long time indeed.
It’s been a while since my last PhD update, and a lot has
happened since my last
blog in February. Since I last posted an update I’ve published a journal article, approved the proofs of a book
chapter, and even appeared on national radio! And this isn’t even to mention my
podcast, website work and the small matter of my thesis…
It’s all going on!
There is a crisis coming in academia. It’s been looming on
the horizon for quite some time, and now threatens to bring the profession into disrepute.
That problem is AI.
For many years now, AI has been used to power chatbots and
digital assistants such as Cortana, Siri and Alexa. Over the years, these bots
have become far more nuanced and complex. While these systems aren’t
intelligent in the same way as a human being, they do a fairly good job of
mimicking human behaviour, and of convincing us that they are ‘real’.
Indeed, these technologies are now so convincing that it
won’t be long before they are put to nefarious use. It’s already been shown
that these systems can write convincing news articles, and it won’t be long before they are
used to write academic essays, and perhaps even more complex works such as research papers
and full-length publications.
Make no mistake: this is a serious issue, and one that demands attention. In the next few years, AI-powered essay mills have the potential to shake academia to its very core.
Humans will always make the final decision on whether armed robots can shoot, according to a statement by the US Department of Defense. Their clarification comes amid fears about a new advanced targeting system, known as ATLAS, that will use artificial intelligence in combat vehicles to target and execute threats. While the public may feel uneasy about so-called “killer robots”, the concept is nothing new – machine-gun wielding “SWORDS” robots were deployed in Iraq as early as 2007.
But our relationship with military robots goes back even further than that. This is because when we say ‘robot’, what we really mean is a technology with some form of ‘autonomous’ element that allows it to perform a task without the need for direct human intervention.
‘Fantastika’ is an umbrella term that embraces the genres of Fantasy, Science Fiction and Horror but can also include Alternate History, Gothic, Steampunk or any other radically imaginative narrative space.
The sixth annual Fantastika conference will aim to define, challenge and debate conceptualisations of embodiment. We seek to investigate how various bodily forms are addressed or ruptured across a myriad of canvases, whether it be through (re)construction, transposition or indeed destabilisation. The conference will diagnose how Fantastika texts may extend upon or confront definitions of what it even means to be ‘embodied’, inviting researchers from posthumanism, medical humanities and other relevant fields to collaborate through productive dialogue.