Is Cobotics Safe?

The emerging paradigm of the Industrial Internet of Things (IIoT) is changing factories into entirely new ecosystems. Machines are not only present on the factory floor, but are beginning to outnumber workers, enabling greater cost and time efficiency and higher-quality products. However, this is creating uncharted territory regarding safety risks. As our relationship with machines becomes closer and more dependent, our existing safety assurance practices and regulatory frameworks are no longer fit for purpose.

The fast-changing nature of manufacturing requires safety and security guidelines to be rethought in order to protect both the worker and the technology. Updating the guidelines requires an understanding of the issues and the direction of change, yet there is a dearth of information, research or evidence to help guide the evolution of new governance frameworks. To fill this gap, an open, multi-stakeholder project has been commissioned by the Global Manufacturing & Industrialisation Summit (GMIS) and Lloyd's Register Foundation (LRF), and undertaken by Policy Links (Institute for Manufacturing, University of Cambridge) to improve safety and security knowledge of 4IR technologies in manufacturing. By collecting and reviewing international evidence, the project aims to deliver recommendations for pilot projects that will help improve safety and security in manufacturing across the world.

As an extension of the project, a briefing paper authored by Richard Hawkins and John McDermid of the Assuring Autonomy International Programme at the University of York explores the challenges associated with safety assurance in more detail. Analysing the international evidence, the authors found that accepted safety assurance regulations and guidelines are no longer fit for purpose, as they do not take account of the growing safety implications of machines connected to a network. The paper recommends that further research on the implications of 4IR in manufacturing is undertaken to ensure that, at the most fundamental level, safety can be assured. The authors believe that, to do so, all safety assurance standards and regulations should include:

- A clear definition of how the system must behave in all circumstances
- Implementation of a system that guides this behaviour, and a method for recording the behaviour
- A full understanding of the safety risks involved, and demonstration of mitigation techniques to prevent them from occurring

However, each of these aspects is challenging in its own right. For example, many of the tasks being automated in factories are complex, which makes it very difficult to describe precisely how the machine must behave in every possible situation it might encounter.
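As a concrete illustration of the second point above, the sketch below shows one minimal way a runtime monitor might both constrain a cobot's behaviour and record every decision for later audit. It is a hypothetical example, not taken from the briefing paper: the limits, state fields and thresholds are placeholders that a real system would derive from its safety case.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safety_monitor")

# Hypothetical operating limits; real values would come from the
# machine's safety case, not from this sketch.
MAX_SPEED_MS = 0.25      # maximum tool speed in m/s
MIN_SEPARATION_M = 0.5   # minimum distance to the nearest worker in m

@dataclass
class RobotState:
    speed_ms: float       # current tool speed
    separation_m: float   # distance to the nearest detected person

def check_and_record(state: RobotState) -> bool:
    """Return True if the state is within the defined safe envelope,
    logging every decision so behaviour can be audited later."""
    safe = (state.speed_ms <= MAX_SPEED_MS
            and state.separation_m >= MIN_SEPARATION_M)
    log.info("speed=%.2f m/s separation=%.2f m -> %s",
             state.speed_ms, state.separation_m, "OK" if safe else "STOP")
    return safe

if __name__ == "__main__":
    # A worker has moved inside the protective distance: the monitor
    # flags the state so the controller can trigger a protective stop.
    if not check_and_record(RobotState(speed_ms=0.4, separation_m=0.3)):
        print("Violation detected: trigger protective stop")
```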
Machine learning (ML) allows a machine to learn how to behave through training under close supervision, or by using a simulation. Yet, in either circumstance, a key safety risk in training the machine is the integrity of the software and the accuracy of what the machine can learn. Whilst close monitoring and analysis ensure the machine is operating within the set parameters, data management for ML becomes a challenge in its own right. With too little data the machine will not learn the task accurately; with data that is too narrow or unrepresentative, the robot will "overfit", meaning that whilst 'the robot is very good at dealing with situations it has encountered during training, it is unable to safely adjust its behaviour to new situations', according to the briefing paper.
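One common way to catch overfitting is to hold back part of the data and compare performance on it with performance on the training set. The sketch below illustrates the idea with a deliberately toy task and an off-the-shelf scikit-learn model; the data, model and task are invented for illustration and do not come from the briefing paper.

```python
# Minimal sketch: detecting overfitting with a held-out validation set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                # toy sensor readings
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)   # noisy label

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# An unconstrained decision tree can memorise its training data...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = accuracy_score(y_train, model.predict(X_train))
val_acc = accuracy_score(y_val, model.predict(X_val))

# ...so a gap between the two scores is the classic overfitting signal:
# excellent on situations seen in training, worse on new ones.
print(f"train accuracy: {train_acc:.2f}, validation accuracy: {val_acc:.2f}")
```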
In addition, a widely cited paper on concrete problems in AI safety describes the notion of "reward hacking", whereby a learning system optimises the objectives it has been given while ignoring other desirable behaviour. For instance, if a cobot learns to move along the shortest path between two points (optimising time and energy usage) but ignores the need to avoid conflicts with the worker, working in close proximity to it is potentially unsafe. Whilst rare, 'the complexity and opacity of the learning process' make it difficult to ensure "reward hacking" has not occurred.
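The toy reward functions below make the point concrete. They are invented for illustration (the weights and the 0.5 m threshold are arbitrary): the first rewards only path length and energy, so a learner maximising it has no reason to keep clear of the worker; the second folds a separation penalty into the same objective.

```python
# Toy illustration of "reward hacking": a reward that only optimises
# path length and energy leaves the learner free to ignore the worker.
def naive_reward(path_length_m: float, energy_j: float) -> float:
    return -(path_length_m + 0.01 * energy_j)

# The safety-aware variant makes collision avoidance part of the objective.
def safety_aware_reward(path_length_m: float, energy_j: float,
                        min_worker_distance_m: float) -> float:
    reward = -(path_length_m + 0.01 * energy_j)
    if min_worker_distance_m < 0.5:   # illustrative threshold
        reward -= 100.0               # large penalty dominates the trade-off
    return reward

# The shortest path passes 0.2 m from the worker; a detour stays 1.0 m away.
print(naive_reward(2.0, 50.0), naive_reward(3.0, 70.0))   # naive: prefers the short path
print(safety_aware_reward(2.0, 50.0, 0.2),
      safety_aware_reward(3.0, 70.0, 1.0))                # safety-aware: prefers the detour
```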
Increasing autonomy also means that human decision-making is transferred to machines. This transfer of responsibility raises a number of challenges when assuring the safety of operations. For example, decisions made by human workers are often based on the judgement, experience and instinct of the operator, which help to ensure a safe outcome. Decisions made by a machine without these contextual elements may lead to different behaviour during operation. Until these challenges are settled, the safety risk of working in close proximity with a machine increases. It is also important to consider whether the machine will stop quickly enough to prevent harm to the worker if something goes wrong.
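Standards for collaborative operation address this last question through speed and separation monitoring: the robot must maintain a protective distance large enough that it can come to rest before a person reaches it. The sketch below is a simplified, illustrative form of that calculation, not the normative one; the figures are placeholders, and a real implementation would follow ISO/TS 15066 and the robot's measured stopping performance.

```python
# Simplified sketch of the "speed and separation monitoring" idea:
# the protective distance accounts for how far both the human and the
# robot travel before the robot comes to rest.
def protective_separation_m(v_human: float, v_robot: float,
                            t_react: float, t_stop: float,
                            intrusion_margin: float = 0.1) -> float:
    human_travel = v_human * (t_react + t_stop)  # human keeps approaching
    robot_travel = v_robot * t_react             # robot moves until it reacts
    braking = 0.5 * v_robot * t_stop             # assumes constant deceleration
    return human_travel + robot_travel + braking + intrusion_margin

# e.g. human at 1.6 m/s, robot at 0.5 m/s, 0.1 s reaction, 0.3 s stopping time
print(f"{protective_separation_m(1.6, 0.5, 0.1, 0.3):.2f} m")
```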
Whilst physical barriers make it easier to provide guarantees about the machine's safety, to realise the potential of cobotics, autonomous systems should operate within an open, or unconstrained, environment. However, the more interaction the autonomous machine has with other things, the harder safety assurance becomes to achieve. Thus, more subtle forms of safety mechanisms are needed.