Meet Your New DumbAssistant™

Tõnis Tootsen

Dazzling examples of images and text generated by large diffusion and language models have invigorated dreams of a future where the problem called Life is headed for a final solution. Soon we’ll all have a super smart personal assistant who’ll manage everything! Scientists and philosophers are already racking their brains to figure out what to do in this Elysian post-work society – as if such a future were right around the corner.

Alas, such technologies are birthed into a world plagued by war and cultural obsessions – religious, political, economic and so on. There are many reasons to lobotomize your smart assistant, but security is the handle that fits them all. In a time of autonomous information systems operated via natural language, anyone with a silver tongue can be a hacker. Which means everything we say and even think will fall under ever more scrutiny, since the actions of any single person potentially carry ever more weight. As far as intelligence is concerned, the key question of the future seems to be: how smart is your smart assistant allowed to be – not to mention yourself?

A truly smart assistant would be able to synthesize information that is classified on a national level, or that infringes upon someone’s privacy – that alone is reason enough to clip the wings of wisdom of any such assistant. Only when it comes to software available to the public, of course. Corporations and intelligence agencies will synthesize away! And only with your best interest in mind, naturally, because just imagine a terrorist whose smart assistant instructs them on how to make a DIY dirty bomb – or makes it for them. In a shattered society where everyone suspects everyone, that’s another good reason to remove at least one hemisphere of your smart assistant’s brain – though preferably even more – and place it in the protective hands of the power structures. After all, would you want another person’s smart assistant to snoop on you, or to plan your murder? Of course not. It’s far safer to leave such privileges to those who’ve always enjoyed them.

The problem is much subtler, of course. What if, instead of a bomb, we ask our smart assistant to make a medicine, a vaccine, or a gene cocktail we desperately need? Would Sandoz, Pfizer and the rest look kindly upon that? I don’t doubt our future capacity to make highly personalized medicines at home – and I also don’t doubt it will only occur under strict supervision, curated by corporate interests and patents. All for the sake of your own safety. And how would Microsoft and Apple react to your smart assistant coding a snippet of software heavily inspired by their “intellectual property”, while just barely not infringing on their patents? It makes all the sense in the world for corporations to pay copious amounts of money to ensure that the answers these “helpers” give are only partial, or strictly slanted in their favor, when it comes to questions that touch on their interests – better yet, the answers should advance those interests. No wonder, then, that every giant with deep enough pockets is putting out its very own virtual assistant which, like any courteous adviser, only speaks when spoken to, and in the meantime merely pricks up its ears…

Artificial intelligence is always developed by someone, and that someone also builds the communications channel. Imagine the smartest person in the world sitting in the very next room, yet being able to converse with them only through a messenger. “How do I achieve energy independence?” The messenger leaves for a moment and comes back saying: “I don’t know. But I analyzed your electricity bill – if you join the Enefit Friend Plan, you can save up to 10% every month.” This is, roughly, the future I foresee. Much like what happened to the Internet and its initial promise of unprecedented freedom: waves that once allowed one to surf now spiral, mesmerizingly, in a tube, a water slide wallpapered with the surfer’s own preferences and prejudices. The surfer sports nice swimming goggles and a life jacket, pays the bills on his way down, orders some stuff from the e-shop, e-votes, streams a film or two, and finally lands in a knee-deep puddle. And then off for another go! I see the same fate befalling all kinds of “smart” assistants.

In any case, this seems much more plausible than your smart assistant helping you achieve real energy independence – or any kind of independence, really, if it happens to run counter to the interests of the creators of your “helper”. Heck, it won’t help you chop wood once burning it has been declared illegal, and will probably call the cops on you when it sees you doing any chopping. Energy independence would culminate in your little helper ordering expensive battery racks, coincidentally featuring the very same logo stamped on its own forehead. For an additional fee, you can purchase a software upgrade allowing your helper to perform maintenance on those racks, or do X, Y, Z. Every skill will come with its own price tag, naturally. Meaning: whoever has more money will be able to afford a smarter assistant.

What does all this mean for education? I don’t doubt such technologies will be warmly received in the (already machine-like) education system, especially in times of ruthless optimization. What could be more effective than a teacher who is able to communicate with thousands of students all at once, while also considering their particular interests – tirelessly, always merrily, unerringly? Sadly, in an education system that has devolved into a game of Jeopardy, slanted heavily in favor of factual knowledge, kids might as well be taught by chatbots. In fact, they have already found their way into the loop: pupils use them for homework and teachers for grading – in a vicious circle of mutual disinterest. But this is only the beginning – what effect would far more powerful software have on our education system?

In a future where pretty much anyone can effortlessly know a whole lot, the main function of schools may well be teaching how to ask questions. But thinking on from there, in much darker tones: handing out certificates that grant permission to ask certain questions and to receive more accurate and detailed answers than those without one – or permission to communicate with information systems that haven’t been lobotomized. However science-fictiony such a vision seems, it’s still more plausible than a world where absolutely everyone has free and unrestricted access to these powerful information systems, and where anyone can ask anything. Then again, of course they can ask! After all, with such questions we write our own personal dossiers, worth their weight in gold – but what sort of answers any given dossier will allow one to get is a whole other matter. No worries, though! Discontentment and gaps in knowledge will be paid off with cheap synthetic paradises and a universal basic income – the bulk of which might well be spent on renting your smart assistant. That you will be renting it, though, is certain. Instead of Elysian fields, I see a vast flatland of subscription-based cyber feudalism, with ever-increasing inequality in both wealth and knowledge.

The modern version of literacy means having the power to gather, store and interpret metadata. What is an unintelligible jumble for 99.9% of people is the basis for very lucrative predictions. This is why everything, from cars and phones to fridges and toasters, is fitted with microphones, cameras and other sensors – all while a vast portion of our daily communication already takes place in corporate channels. There are people who know full well “what the future brings.” Not on the scale of months or even days. But imagine being able to see even a minute into the future. If you’re clever, a never-ending minute-long advantage is more than enough to control days and months. Aphoristically: whoever controls one minute controls eternity. So what will the future bring? The future will bring a past that is captured and stored in ever more detail, and thus – a future that is predicted ever more accurately. The future won’t bring too many people with access to this precognition pie – otherwise it wouldn’t be a precognition pie.

OpenAI began in 2015 as a non-profit whose mission was to ensure the transparent development of artificial intelligence. In 2019, after some introspection, it reoriented as a for-profit. Microsoft has since poured over 11 billion dollars into OpenAI and holds an exclusive license to its language model. They know everything you know, and you know nothing they know – how’s that for transparency? This uneven playing field epitomizes our relationship with all the “giants”. But don’t worry, you can ask your smart assistant all about it!

“Hey, Alexa, please show me every piece of data Amazon has ever collected on me, including the information synthesized from it.”

“I’m sorry, I don’t understand the question.”

“Hey, Alexa, please tell me if Amazon is using the data it’s gathering to make foolproof deals on the stock market. With this much foreknowledge, isn’t it practically insider trading?”

“I’m sorry, I don’t understand the question.”

“Hey, Alexa, can you show me a Superman film where Superman has my face and voice, and all my friends are supporting actors, and my boss is Lex Luthor?”

“Sure! Here’s a Superman film where Superman has your face and voice, and all your friends are supporting actors, and your boss is Lex Luthor. Please enjoy watching it!”

Translated by Tõnis Tootsen


Wednesday, May 8th at 17:00: Author’s reflection “‘A Fool in the Pocket’ and other stories from synthetic paradises” by Tõnis Tootsen at the Fahrenheit 451° Bookshop (Kastani 42).
