
The Worker and the Spy: A.I. in the Modern Workspace
CUT TO: INT. SHOPFRONT – EARLY MORNING.
It’s still dark out. The staff have gathered in a huddle by the sundries section. We’ve been summoned by the Store Manager. He’s tall, rotund in a sagging way, like a well-used beanbag. His hair is slicked back to excess. He is not handsome, but he does what he can – resulting in mediocrity. That word seems to follow him like a lingering odour. He never works with the morning crew, but he has high expectations. Perhaps he used to be a worker. Whatever the case, he gets to tell us how to do our jobs.
Store Manager
We’re focusing a lot on efficiency these days. We have an app (he holds up a phone). According to the app, there’s enough of you to ensure all the shelves are filled by 8:30am. But we’re missing the app’s quota. Every day.
You can make the quota by packing faster; you need to take the items, force them onto the shelves, then start as quickly as possible on the next crate. Don’t – Stop.
That’s what we need to do to meet the quota. If there’s anything the managers can do to help encourage you to pack faster, (he smiles) let us know.
The crowd disperses. They try, but they do not finish in time.
END SCENE.
Automated prediction. It was the first time I had encountered it in a work environment, and it was making the job harder. It was another manager. An overseer without a voice or body, and yet a presence that was felt by everyone. It expected us to work at a speed it understood best – the speed of no rest, no refuelling, and no sympathy.
The App functioned by calculating the number of crates received at the store and comparing it to the staff roster, deducing an optimal rate of work, then producing a deadline. If we met the deadline, we were doing well. If we did not meet the deadline (and we never did) we were doing badly. That meant the managers could punish us for not working hard enough.
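For the curious, the App's arithmetic can be imagined as something like this. A minimal sketch only: the real app's internals were never shown to us, so the function name, the packing rate, and every number here are my own invention.

```python
# Hypothetical reconstruction of the quota app's deadline logic.
# The "optimal" rate and all figures are invented for illustration.
from datetime import datetime, timedelta

CRATES_PER_WORKER_HOUR = 6  # assumed rest-free, refuelling-free pace

def packing_deadline(crates: int, staff: int, shift_start: datetime) -> datetime:
    """Divide the workload evenly across the roster at a constant assumed pace."""
    hours_needed = crates / (staff * CRATES_PER_WORKER_HOUR)
    return shift_start + timedelta(hours=hours_needed)

start = datetime(2023, 5, 1, 6, 0)  # a 6:00 am shift
deadline = packing_deadline(crates=120, staff=5, shift_start=start)
print(deadline.strftime("%H:%M"))  # 120 crates / (5 x 6 per hour) = 4 hours -> 10:00
```

Note what the sketch leaves out, because the app leaves it out too: breaks, deliveries arriving late, a colleague calling in sick. Miss the deadline and the arithmetic concludes you were slow.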
I know. It sounds like something out of a science fiction piece. That entire shift I imagined I had taken one step closer to dystopia. So much of our understanding of the modern world is filtered through the narrative lens of fiction. Writer Isaac Asimov coined words in his stories that today have genuine meaning. ‘Robotics’ is now an interdisciplinary branch of computer science and engineering. ‘Psychohistory’, another of his coinages, now names a real academic discipline – albeit one far removed from the predictive science of his novels. The real world is catching up to writers’ ruminations.
What will the everyday work environment look like in the hyper-modern age?
I pose this question to an engineer at Satis A.I. – a start-up focused on introducing A.I. assistance to one of the most ubiquitous work environments in London: the kitchen.
In particular, Satis A.I. sit at the forefront of the ‘dark kitchen’ industry. These are kitchens run by massive enterprises like Deliveroo or Uber Eats in connection with huge brand names, set up to produce food solely for take-out. Employees are cooped up in industrial cookhouses, their expected capacity for service far beyond anything that has come before. The take-out business already brings in nearly £100 billion every year. Satis A.I. promise their investors even higher profit margins.
Question Time…
Q. What is your role at Satis A.I.?
A. ‘I work in Computer Vision. We use cameras to detect patterns in repetitive tasks, and we use that information to build greater context for machine intelligence.’
Q. Could you explain the different strands of A.I. – preferably on how Machine Learning, Symbolic Logic, and Robotics differentiate, and how they exhibit intelligence?
A. ‘It largely depends on the methods of teaching. Primarily, you have Machine Learning and Deep Learning. You provide the A.I. with data and then it classifies sentences, humans, faces, and patterns. The A.I. ‘learns’, and it quantifies that this cluster of pixels of a certain shape and colour represents a cat, and that other one a dog. But it can only differentiate between those two animals because the answer is rigidly programmed into its brain. If you give the machine enough data, eventually it can and will distinguish between all organic organisms – mammals, fish, humans – their species and their ethnicities.
This is ‘knowledge’… But this isn’t necessarily how human intelligence works. If you taught A.I. only even numbers, how would it ever deduce odd numbers or the concept of zero?
A truly intelligent machine needs to do its own symbolic reasoning to find loopholes in its thinking; that’s how humans discover new things. And this is really the heart of Artificial Intelligence. Symbolic Logic – the other method – is based on formulating debates and mind maps. You have a premise, you connect the dots as far and as logically as possible for the A.I. and come to a conclusion.
We can compare Machine Learning and Symbolic Logic with this example: a cat has four legs. On all fours, my arms may be misconstrued as legs. Therefore, the cat and I in this context are the same. Machine Learning stops there. Symbolically trained A.I. however would realise that most humans don’t have fur, thus cancelling out the possibility that the human is a cat.
This is the problem we have in Machine Learning; it doesn’t consider that ‘commonality’ and ‘identical’ aren’t the same thing. An A.I. using Symbolic Logic may decide that the crawling human is cat-like, but then realise that this creature is too large, and has no fur, therefore disqualifying cats from the possible solutions. It then continues to reason, disqualifying dogs, and possibly coming to the conclusion that it could be a man on all fours.’
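The elimination the engineer describes can be sketched as a toy rule-based filter. To be clear, this is my own illustration of the idea, not Satis A.I.'s system; the observation, the candidates, and every attribute are invented.

```python
# Toy symbolic elimination: discard candidates whose known attributes
# contradict the observation. All facts here are invented for illustration.
observation = {"legs_visible": 4, "has_fur": False, "size": "large"}

# Background knowledge about each candidate creature.
candidates = {
    "cat":   {"legs_visible": 4, "has_fur": True,  "size": "small"},
    "dog":   {"legs_visible": 4, "has_fur": True,  "size": "medium"},
    "human": {"legs_visible": 4, "has_fur": False, "size": "large"},
}

def deduce(obs, knowledge):
    """Keep only candidates whose every known attribute matches the observation."""
    survivors = []
    for name, facts in knowledge.items():
        if all(obs.get(attr) == value for attr, value in facts.items()):
            survivors.append(name)
    return survivors

print(deduce(observation, candidates))  # ['human'] - a man on all fours
```

Four legs alone rules nothing out; it is the fur and the size that disqualify cat and dog, exactly as in the engineer's example.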
– Or a hairless ape.
‘Or a hairless ape. Did you see what your brain just did there? It made a connection between two abstract ideas. The answer to a question in the context of symbolic learning will depend on the individual machine’s logic…’
Q. How do you use A.I. at work?
A. ‘At Satis A.I. we have object detection, which identifies burgers, wraps and so on, and then we have something ‘deeper’. This is called Tracking. We can track ingredients from the beginning of the cooking process to the end. Our A.I. assistants will pick up on errors in production, packing orders, etc., and give real-time assistance to employees. Such as, ‘You put cheese on a burger, but the customer note declared an intolerance to lactose.’
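That real-time nudge could be imagined as a rule as simple as this. Again, a toy of my own making: the allergen mapping, the function, and the message wording are all invented, and Satis A.I.'s actual checks are not public.

```python
# Hypothetical sketch of a real-time order check: compare the ingredients
# the cameras have tracked against the customer's order note.
ALLERGEN_RULES = {"lactose": {"cheese", "butter", "milk"}}  # invented mapping

def check_order(detected_ingredients: set, customer_note: str) -> list:
    """Warn when a tracked ingredient clashes with the customer's note."""
    warnings = []
    for allergen, foods in ALLERGEN_RULES.items():
        if allergen in customer_note.lower():
            for item in sorted(detected_ingredients & foods):
                warnings.append(f"You put {item} on the order, but the customer "
                                f"note declared an intolerance to {allergen}.")
    return warnings

print(check_order({"bun", "patty", "cheese"}, "Intolerance to lactose"))
```

The point is less the rule itself than where it fires: in the seconds between assembling the burger and sealing the bag, while the mistake can still be undone.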
Q. There is a lot of confusion and fear surrounding artificial intelligence and its impact on labour. Will A.I. make part of the workforce redundant?
A. ‘There’s some reason to be afraid, yes. We have seen A.I. systems perform better than humans in narrow, well-defined tasks. In law, A.I. can sift through thousands of cases in moments. It can decide on promotions, do accounts…
We have to work on re-skilling the workforce and supporting people who have been replaced due to automation. It’s happening right now. But that’s the government’s job. They aren’t doing an amazing job of it, but should progress only move at the pace of bureaucracy? People will be replaced, but there will always be some things that A.I. can’t do and we should focus on exploiting those areas to make new jobs.
Having said this, we also shouldn’t forget that in 2017, Elon Musk said that self-driving cars would be achieved and ubiquitous by 2019. The Father of Deep Learning, Geoffrey Hinton, said every radiologist will be replaced by A.I. Has that happened yet? No.’
This is a good moment to pause and consider some of the things being said. Some speculate that by 2030, 375 million jobs will be lost worldwide to automation and A.I.
1.5 million or more of those will be in England.
Q. What would you say about privacy? Would you consider the visual data Satis A.I. gathers to be ‘personal’?
A. ‘A.I. requires a huge amount of data to be useful. A big problem is that governments are willing to sell personal data to tech firms for the purpose of producing A.I. that they (bureaucrats and defence specialists) will find useful.
In the case of Satis A.I., we want to produce a generic kitchen A.I. model. We don’t want it to rely on employee information, their name, sexuality, or background. If the restaurant owners were nefarious, they could use our footage to deduce if certain employees were working on certain days where performance wasn’t up to scratch… This isn’t allowed, per se.
Of course, in Big Tech this already happens. You CAN connect physical gestures and appearances to a personal identity. This leads to the tracking of movements of certain individuals, teachers, journalists, political dissenters… whistle-blowers… All you’d need is a few seconds of footage.’
Q. How have employees responded to the cameras?
A. ‘This is interesting. Sometimes, our cameras come with a kitchen display unit, just a generic KDS as used in Nando’s, Burger King, KFC. Our cameras can give real-time tick-boxes and know when you’ve packed food properly. You don’t need to worry about checking the bag again or worry about forgetting which bags are for which order. In this context, the employees have been overwhelmingly pleased. It reduces their cognitive load. Less stress.
When we didn’t give them a display there was a lot of suspicion. People were blocking the cameras or turning them away with mop handles. Our detection system is useless if workers don’t like using it. We need communication! Good communication between man and machine. There are competitors of ours who focus on achieving 100% accuracy, and they do this by putting up upwards of twenty cameras in the kitchen –’
Q. Twenty?
A. ‘Twenty. They capture everything without exception. Imagine being some poor employee trying to work under those conditions. That’s terrifying!’
Q. Is it that hard to keep track of food items?
A. ‘Yes. Sometimes an employee will be making a wrap, and then they’ll take the wrap out of the camera’s field of view and the A.I. will say, “I can see you have a wrap in your hands. It’s probably the one you were just working on, but I can’t be sure. I’m 60% sure.” The kitchen is a very complex place. There are things that go into ovens and come out… different. What’s up with that?’
Q. Companies and institutions like DeepMind have previously been accused of mismanaging A.I. and data. Can you imagine a parallel world where your products are somehow weaponised?
A. ‘There’s nothing to imagine. Object detection is popular right now. It’s already being weaponised. But Satis A.I. has nothing to do with that.’
Q. Finally, what is the future of your company?
A. ‘The future is bright. These employees work on minimum wage, 13-to-14-hour shifts, and are little managers in their own right. What they can do is astounding. It’s mundane work, but extremely complicated in terms of workload and workflow. If our A.I. can help reduce the cognitive load and increase productivity, then we’ll be in business. You know… the human who is given the information gathered by deep learning machines is super-powered. It’s almost like we skipped a problem and created the ideal cyborg.’
Developers have a simple goal. They want to make as much money as possible, and this means only investing resources into areas of Machine Learning that are relevant to businesses today. While it isn’t impossible that Satis A.I. will decide to, with monumental effort, overhaul their programs… they won’t. They don’t need to. Weaponising object detection has already happened. Somebody else is making money studying the same technology in a far more morbid context.
In reality, Satis A.I. will just continue to prevent the wrong burger from going into the wrong paper bag. At worst, they make some poor burger flipper’s job harder, should they choose to adopt automated prediction.
CUT TO: INT. RESTAURANT – DAY.
An order appears on the screen in a busy kitchen.
Double patty with extra cheese.
2:59
The timer starts. A young man with pimples on his face throws two meat patties on the grill. Hisss. Steam floats up into his face. Before he can even reach for the cheese, one hundred orders come through in a flash of colours, one hundred different burgers with different specifications and different times… all calculated by A.I.
The task has an overall expected time limit: 25 minutes. The boy freezes, spatula in his sweaty hand. He is thinking about how hard his shift has just become.
PAN UP.
A camera on an electronic arm turns to face him. A dim red bulb shines a fresh new spot on his forehead. While he is contemplating his situation, the camera takes note of all that wasted time.