14 January 2017

Robot Charter

The Artificial Intelligence report by the European Parliament's Committee on Legal Affairs, noted in the preceding post, features a draft framework covering the following areas:
Definition and classification of 'smart robots'
A common European definition for 'smart' autonomous robots should be established, where appropriate including definitions of their subcategories, taking into consideration the following characteristics:
The capacity to acquire autonomy through sensors and/or by exchanging data with its environment (inter-connectivity) and the analysis of those data
The capacity to learn through experience and interaction
The form of the robot’s physical support
The capacity to adapt its behaviours and actions to its environment
Registration of 'smart robots'
For the purposes of traceability and in order to facilitate the implementation of further recommendations, a system of registration of advanced robots should be introduced, based on the criteria established for the classification of robots. The system of registration and the register should be Union-wide, covering the internal market, and should be managed by an EU Agency for Robotics and Artificial Intelligence.
Civil law liability
Any chosen legal solution applied to robots' liability in cases other than those of damage to property should in no way restrict the type or the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the aggrieved party on the sole grounds that damage is caused by a non-human agent. The future legislative instrument should provide for the application as a rule of strict liability to damage caused by 'smart robots', requiring only proof of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party. An obligatory insurance scheme, which could be based on the obligation of the producer to take out insurance for the autonomous robots it produces, should be established. The insurance system should be supplemented by a fund in order to ensure that damages can be compensated for in cases where no insurance cover exists.
Interoperability, access to code and intellectual property rights
The interoperability of network-connected autonomous robots that interact with each other should be ensured. Access to the source code should be available when needed in order to investigate accidents and damage caused by 'smart robots'. Criteria for ‘intellectual creation’ for copyrightable works produced by computers or robots should be drawn up.
Disclosure of use of robots and artificial intelligence by undertakings
Undertakings should be obliged to disclose:
– the number of 'smart robots' they use,
– the savings made in social security contributions through the use of robotics in place of human personnel,
– an evaluation of the amount and proportion of the revenue of the undertaking that results from the use of robotics and artificial intelligence.
The report also features a Charter of Robotics.
The proposed code of ethical conduct in the field of robotics will lay the groundwork for the identification, oversight and compliance with fundamental ethical principles from the design and development phase. The framework must be designed in a reflective manner that allows individual adjustments to be made on a case-by-case basis in order to assess whether a given behaviour is right or wrong in a given situation and to take decisions in accordance with a pre-set hierarchy of values. The code should not replace the need to tackle all major legal challenges in this field, but should have a complementary function. It will, rather, facilitate the ethical categorisation of robotics, strengthen the responsible innovation efforts in this field and address public concerns. Special emphasis should be placed on the research and development phases of the relevant technological trajectory (design process, ethics review, audit controls, etc.). It should aim to address the need for compliance by researchers, practitioners, users and designers with ethical standards, but also introduce a procedure for devising a way to resolve the relevant ethical dilemmas and to allow these systems to function in an ethically responsible manner.
The Code of Ethical Conduct for Robotics Engineers has the following Preamble:
• The Code of Conduct invites all researchers and designers to act responsibly and with absolute consideration for the need to respect the dignity, privacy and safety of humans.
• The Code asks for close cooperation among all disciplines in order to ensure that robotics research is undertaken in the European Union in a safe, ethical and effective manner.
• The Code of Conduct covers all research and development activities in the field of robotics.
• The Code of Conduct is voluntary and offers a set of general principles and guidelines for actions to be taken by all stakeholders.
• Robotics research funding bodies, research organisations, researchers and ethics committees are encouraged to consider, at the earliest stages, the future implications of the technologies or objects being researched and to develop a culture of responsibility with a view to the challenges and opportunities that may arise in the future.
• Public and private robotics research funding bodies should request that a risk assessment be performed and presented along with each submission of a proposal for funding for robotics research. Such a code should consider humans, not robots, as the responsible agents.
Researchers in the field of robotics should commit themselves to the highest ethical and professional conduct and abide by the following principles:
Beneficence – robots should act in the best interests of humans;
Non-maleficence – the doctrine of ‘first, do no harm’, whereby robots should not harm a human;
Autonomy – the capacity to make an informed, un-coerced decision about the terms of interaction with robots;
Justice – fair distribution of the benefits associated with robotics and affordability of homecare and healthcare robots in particular.
Fundamental Rights
Robotics research activities should respect fundamental rights and be conducted in the interests of the well-being of individuals and society in their design, implementation, dissemination and use.
Human dignity – both physical and psychological – is always to be respected.
Precaution
Robotics research activities should be conducted in accordance with the precautionary principle, anticipating potential safety impacts of outcomes and taking due precautions, proportional to the level of protection, while encouraging progress for the benefit of society and the environment.
Inclusiveness
Robotics engineers guarantee transparency and respect for the legitimate right of access to information by all stakeholders. Inclusiveness allows for participation in decision-making processes by all stakeholders involved in or concerned by robotics research activities.
Accountability
Robotics engineers should remain accountable for the social, environmental and human health impacts that robotics may impose on present and future generations.
Safety
Robot designers should consider and respect people’s physical wellbeing, safety, health and rights. A robotics engineer must preserve human wellbeing, while also respecting human rights, and disclose promptly factors that might endanger the public or the environment.
Reversibility
Reversibility, being a necessary condition of controllability, is a fundamental concept when programming robots to behave safely and reliably. A reversibility model tells the robot which actions are reversible and how to reverse them if they are. The ability to undo the last action or a sequence of actions allows users to undo undesired actions and get back to the ‘good’ stage of their work.
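As an illustration only, and not part of the report, the sketch below shows one way such a reversibility model might be organised in Python: each action is registered with an inverse where one exists, and executed actions are kept on an undo stack so that the last action, or a sequence of actions, can be rolled back to a known-good state. The class names and the gripper example are hypothetical.

# Minimal sketch of a reversibility model (illustrative only): each action the
# robot can take is registered with an inverse, or marked irreversible, and
# executed actions are kept on an undo stack so they can be rolled back.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Action:
    name: str
    execute: Callable[[], None]
    inverse: Optional[Callable[[], None]] = None  # None means irreversible

    @property
    def reversible(self) -> bool:
        return self.inverse is not None


@dataclass
class ReversibilityModel:
    history: List[Action] = field(default_factory=list)  # undo stack

    def perform(self, action: Action) -> None:
        action.execute()
        self.history.append(action)

    def undo_last(self) -> bool:
        # Undo the most recent action if it is reversible; report success.
        if not self.history:
            return False
        action = self.history[-1]
        if not action.reversible:
            return False  # cannot roll back past an irreversible action
        action.inverse()
        self.history.pop()
        return True

    def undo_to_safe_state(self) -> None:
        # Roll back reversible actions until none remain or an irreversible one is hit.
        while self.undo_last():
            pass


# Example: a gripper whose open/close actions invert each other.
state = {"gripper": "open"}
close = Action(
    name="close_gripper",
    execute=lambda: state.update(gripper="closed"),
    inverse=lambda: state.update(gripper="open"),
)
model = ReversibilityModel()
model.perform(close)
model.undo_to_safe_state()
print(state)  # back to {'gripper': 'open'}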
Privacy
The right to privacy must always be respected. A robotics engineer should ensure that private information is kept secure and only used appropriately. Moreover, a robotics engineer should guarantee that individuals are not personally identifiable, aside from exceptional circumstances and then only with clear, unambiguous informed consent. Human informed consent should be pursued and obtained prior to any man-machine interaction. As such, robotics designers have a responsibility to develop and follow procedures for valid consent, confidentiality, anonymity, fair treatment and due process. Designers will comply with any requests that any related data be destroyed, and removed from any datasets.
Maximising benefit and minimising harm
Researchers should seek to maximise the benefits of their work at all stages, from inception through to dissemination. Harm to research participants (human subjects of an experiment, trial or study) must be avoided. Where risks arise as an unavoidable and integral element of the research, robust risk assessment and management protocols should be developed and complied with. Normally, the risk of harm should be no greater than that encountered in ordinary life, i.e. people should not be exposed to risks greater than or additional to those to which they are exposed in their normal lifestyles. The operation of a robotics system should always be based on a thorough risk assessment process, which should be informed by the precautionary and proportionality principles.
The associated Code for Research Ethics Committees (RECs) is as follows:
Principles
Independence
The ethics review process should be independent of the research itself. This principle highlights the need to avoid conflicts of interest between researchers and those reviewing the ethics protocol, and between reviewers and organisational governance structures.
Competence
The ethics review process should be conducted by reviewers with appropriate expertise, taking into account the need for careful consideration of the range of membership and ethics-specific training of RECs.
Transparency and accountability
The review process should be accountable and open to scrutiny. RECs need to recognise their responsibilities and to be appropriately located within organisational structures that give transparency to the REC operation and procedures to maintain and review standards.
The role of a Research Ethics Committee
A REC is normally responsible for reviewing all research involving human participants conducted by individuals employed within or by the institution concerned; ensuring that ethics review is independent, competent and timely; protecting the dignity, rights and welfare of research participants; considering the safety of the researcher(s); considering the legitimate interests of other stakeholders; making informed judgements of the scientific merit of proposals; and making informed recommendations to the researcher if the proposal is found to be wanting in some respect.
The constitution of a Research Ethics Committee
A REC should normally: be multidisciplinary; include both men and women; be comprised of members with a broad experience of and expertise in the area of robotics research. The appointment mechanism should ensure that the committee members provide an appropriate balance of scientific expertise, philosophical, legal or ethical backgrounds, and lay views, and that they include at least one member with specialist knowledge in ethics, users of specialist health, education or social services where these are the focus of research activities, and individuals with specific methodological expertise relevant to the research they review; and they must be so constituted that conflicts of interest are avoided.
Monitoring
All research organisations should establish appropriate procedures to monitor the conduct of research which has received ethics approval until it is completed, and to ensure continuing review where the research design anticipates possible changes over time that might need to be addressed. Monitoring should be proportionate to the nature and degree of risk associated with the research. Where a REC considers that a monitoring report raises significant concerns about the ethical conduct of the study, it should request a full and detailed account of the research for full ethics review. Where it is judged that a study is being conducted in a way that is unethical, it should consider the withdrawal of its approval and require that the research should be suspended or discontinued.
In relation to the licensing of designers -
• You should take into account the European values of dignity, freedom and justice before, during and after the process of design, development and delivery of such technologies, including the need not to harm, injure, deceive or exploit (vulnerable) users.
• You should introduce trustworthy system design principles across all aspects of a robot’s operation, for both hardware and software design, and for any data processing on or off the platform for security purposes.
• You should introduce privacy by design features so as to ensure that private information is kept secure and only used appropriately.
• You should integrate obvious opt-out mechanisms (kill switches) that should be consistent with reasonable design objectives.
• You should ensure that a robot operates in a way that is in accordance with local, national and international ethical and legal principles.
• You should ensure that the robot’s decision-making steps are amenable to reconstruction and traceability.
• You should ensure maximal transparency in the programming of robotic systems, as well as predictability of robotic behaviour.
• You should analyse the predictability of a human-robot system by considering uncertainty in interpretation and action and possible robotic or human failures.
• You should develop tracing tools at the robot’s design stage. These tools will facilitate accounting and explanation of robotic behaviour, even if limited, at the various levels intended for experts, operators and users (see the sketch after this list).
• You should draw up design and evaluation protocols and join with potential users and stakeholders when evaluating the benefits and risks of robotics, including cognitive, psychological and environmental ones.
• You should ensure that robots are identifiable as robots when interacting with humans.
• You should safeguard the safety and health of those interacting and coming in touch with robotics, given that robots as products should be designed using processes which ensure their safety and security. A robotics engineer must preserve human wellbeing while also respecting human rights and may not deploy a robot without safeguarding the safety, efficacy and reversibility of the operation of the system.
• You should obtain a positive opinion from a Research Ethics Committee before testing a robot in a real environment or involving humans in its design and development procedures.
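To make the traceability and tracing-tool points above more concrete, here is a minimal, purely illustrative Python sketch, not drawn from the report, of a decision trace: each decision step is logged with its inputs, the chosen action and a short rationale, and the same trace can be rendered in full for expert investigation or as a plain-language summary for users. All names are hypothetical.

# Minimal sketch of a decision trace (illustrative only): each decision step is
# stored as a structured record so behaviour can be reconstructed afterwards
# and summarised at different levels of detail.

import json
import time
from dataclasses import dataclass, asdict, field
from typing import Any, Dict, List


@dataclass
class DecisionRecord:
    timestamp: float
    sensor_inputs: Dict[str, Any]   # what the robot observed
    chosen_action: str              # what it decided to do
    rationale: str                  # why (rule fired, threshold crossed, etc.)


@dataclass
class DecisionTrace:
    records: List[DecisionRecord] = field(default_factory=list)

    def log(self, sensor_inputs: Dict[str, Any], chosen_action: str, rationale: str) -> None:
        self.records.append(DecisionRecord(time.time(), sensor_inputs, chosen_action, rationale))

    def for_experts(self) -> str:
        # Full structured trace, suitable for accident investigation.
        return json.dumps([asdict(r) for r in self.records], indent=2)

    def for_users(self) -> List[str]:
        # Plain-language summary: one line per decision.
        return [f"{r.chosen_action}: {r.rationale}" for r in self.records]


# Example use
trace = DecisionTrace()
trace.log({"obstacle_distance_m": 0.4}, "stop", "obstacle closer than 0.5 m safety threshold")
print(trace.for_users())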
The Licence for Users is simpler -
• You are permitted to make use of a robot without risk or fear of physical or psychological harm.
• You should have the right to expect a robot to perform any task for which it has been explicitly designed.
• You should be aware that any robot may have perceptual, cognitive and actuation limitations.
• You should respect human frailty, both physical and psychological, and the emotional needs of humans.
• You should take the privacy rights of individuals into consideration, including the deactivation of video monitors during intimate procedures.
• You are not permitted to collect, use or disclose personal information without the explicit consent of the data subject.
• You are not permitted to use a robot in any way that contravenes ethical or legal principles and standards.
• You are not permitted to modify any robot to enable it to function as a weapon.