By Pam Loch
Professionals working outside the tech industry may not be aware of the extent to which artificial intelligence (AI) – encompassing AI systems, robotics and cognitive tools – is already integrated into our day-to-day experiences and working lives. When a bank calls an account owner about a possible fraud, for example, AI is usually responsible for detecting it.
The professional appetite for AI is growing steadily, with new technologies constantly released to streamline existing processes – from specialized tools for recruitment, onboarding and data analysis to increasingly effective security programs. From a legal perspective, however, hailing the potential benefits of AI needs to be tempered with anticipating the kinds of problems these technologies might pose as they become increasingly established in the workplace.
Guidance on devising policies that enable AI and human workforces to coexist harmoniously within an organization will become steadily more essential. In general, policies relating to AI are expected to address two particular areas of concern:
- Injuries and fatalities. Where applicable, organizations should seek guidance on implementing protocols to follow in the event of a work-related accident or fatality which involves AI.
- Job protection. Ensuring that AI is integrated into the organization in a way which complements rather than eclipses the need for human activity.
Ruthless Robotics and Their Sinister Implications
Since its inception, AI in the workplace has been regarded with a degree of suspicion by a proportion of the population. Qualms relate especially to its potential to “outsmart” humans, with Stephen Hawking telling the BBC in 2014 that he could foresee humans being “superseded” by AI, as we are “limited by slow biological evolution.”
In a professional rather than a homicidal sense, some AI developers already aim for their inventions to replace the humans who would previously have carried out particular tasks. Analytics tool ThoughtSpot, for example, launched in 2012, deliberately positions itself as a harbinger of the obsolescence of hired talent, billing itself as an opportunity for employers to “stop waiting for custom reports from data experts,” whose performance it claims to outshine.
At the cutting edge of AI development, hypothesizing about the technology’s potential to outsmart and supplant humans takes on a more sinister overtone. These fears stem from the ruthlessness that is so often a by-product of AI’s single-minded commitment to completing programmed tasks – a propensity AI writer Janelle Shane terms “destructive problem solving.”
Most workplaces are a long way off from implementing technologies of this level of autonomous sophistication. Nevertheless, as we step into the future, it is vital to proactively anticipate and plan around the possibilities of AI going awry in a professional context.
AI and Ethics
The creation of policies that protect against all eventualities engendered by the introduction of AI into the workplace is currently under consideration worldwide. “We’re building robots and machines driven by AI, we’re putting them into the workplace, and they are becoming more complex by the month, but our ability to control them in terms of what they do and the decisions they make becomes more limited every day,” says Matthew Linton, counsel at Ogletree Deakins in Denver, USA, who specializes in advising organizations on their use of robotics.
This poses a number of unknowns with respect to policy creation. Many issues surrounding the adoption of AI by workforces and its various risks and benefits are currently in discussion, with the multifarious possible conclusions about correct practice yet to be drawn. Furthermore, the field of AI in the workplace is a landscape with considerable variation between industries.
Policies relating to the use of AI in all sectors will ideally be derived from extensive engagement with specialized knowledge, particularly regarding the specifics of individual organizations and the forms of AI used. They will draw on perspectives from academics, industry professionals, governing bodies and anybody else with learning relevant in each case to building an ethical foundation around the use of AI.
Specifically with regard to possible deaths and accidents that may occur in connection with AI devices, organizations are advised to take a defensive approach to health and safety. This means anticipating the ways in which the use of AI could go wrong and devising policies that prepare for them, as well as identifying and declaring, whenever they begin using AI, the areas where the outcome and/or the ideal legal response to it is unknown.
From Productivity and Progress to Privacy Concerns
Despite the hypothetical risks, the current reality is that many different kinds of organizations are already embracing AI and the new possibilities and capabilities it affords. These include increasing the productivity of the human workforce by taking over repetitive tasks, as well as collecting and analyzing data in ways which provide previously inaccessible insights.
For example, new analytics tools enable businesses to make transformative adaptations to their business models based on AI-derived insights into customer browsing habits. A further area in which AI is already in use is monitoring staff performance: Humanyze, founded in 2011, provides AI tools to organizations across a vast array of industries to help them track employee performance, even recording workers’ speech and movement throughout the day and providing insights into emotions and stress levels.
Although some welcome this as a means of eliminating unproductive work habits and collecting insights that can be used to boost output and performance, others may consider this level of monitoring invasive. Wherever AI is used in this way, the legal privacy rights of employees must be clearly delineated for all staff and protected with effective policies.
Working with AI, Rather than Against It
Aside from the legal ramifications of possible accidents and fatalities, one of the gravest concerns about AI in the workplace is its propensity to render human workers superfluous. This is already seen, for example, where chatbots replace humans in providing customer service solutions. In its 2019 assessment of global human capital trends, Deloitte proposed a viable response: the “superjob.” Organizations adopting AI can create roles in which people skills such as empathy, teamwork, collaboration and listening are integral. This will ensure that, as AI becomes increasingly established, human workers remain indispensable.
When employers work with AI rather than against it, it can also help troubleshoot problems within the workforce. Early research shows that, where AI chatbots are available to staff, professionals feel more comfortable raising topics such as interpersonal disagreements than they do with human managers. This means that grievances and other legal matters can be raised swiftly and progressed to conclusion in an efficient, labour-saving way, relieving traditionally fraught processes of some of their usual emotionality.