Are over-intelligent robots and evil computers confined to dystopian sci-fi, or should today’s businesses be worried about AI?
From the power loom to robotic car assembly plants, technology has significantly impacted jobs and employment. Until now, lower-skilled work has been the main victim. Experts now predict that machines will be able to perform tasks previously considered ‘beyond’ the reach of automation, including sophisticated analysis and decision-making, creating complex ethical and moral dilemmas for businesses, their workers and their customers.
Extensive research has been carried out on the future of AI (artificial intelligence) in relation to machine learning, digitalisation, automation and augmentation. Many studies conclude that robots may indeed take jobs; however, upskilling and retraining will allow many people to move into the higher-grade jobs created to manage and oversee AI and to ensure ‘intelligibility’ (ie that technical processes remain transparent and explainable). A recent study by the CIPD and PA Consulting suggests that automation is creating more interesting and fulfilling jobs.
Proponents of artificial intelligence and machine learning emphasise the goal of assisting humanity, not replacing it. They cite the importance of AI systems in generating insights that human experts then use to inform their decision-making.
Until recently, most workers’ fears about artificial intelligence and its impact on their experience at work were largely confined to concerns about losing jobs and changes to work processes brought by new automation capabilities. However, the spectrum of risk has been widening, fuelled by media coverage of major AI incidents, including machine mistakes, bias and potential misuse for criminal and corrupt purposes.
Futurists Richard and Daniel Susskind, in their book The Future of the Professions, suggest that even the work of doctors, teachers, architects, lawyers and accountants may eventually be automated. These types of role, which require delicate and nuanced decision-making, would demand increasingly robust checks, measures and controls if ‘taken over’ by AI.
AI is on the rise, but I believe AI ethics will have to take centre stage as the complexity of machine learning increases beyond human comprehension. Humans will no longer understand all the component parts, so new ways of monitoring, auditing and amending will be needed if businesses are to trust and rely on their artificially intelligent counterparts.

The growing use of open source tools to help inform AI ethics is a move in the right direction. Similarly, the EU’s GDPR shows how legislation can help make organisations accountable for the decisions made by their algorithms. However, despite an increasing desire by regulators, civil society and social theorists to ensure the latest technologies are fair and ethical, the pace of development and the lack of international co-operation are passing this responsibility back down to businesses.

It seems certain that more organisations will have to set up their own ‘ethics boards’ to ensure they remain in control of AI on matters of safety, privacy, bias and transparency. Hopefully they will have more success than Google, which managed to operate its independent council for just over a week before shutting it down, amid controversy over the appointment of anti-LGBT and pro-military board members!
If you enjoyed reading this, have a look at this earlier article: Robot Revolution