Analysis and Evaluation

Literature review of ‘Machine Ethics: Creating an Ethical Intelligent Agent’ by
Anderson and Anderson
Machine ethics concerns applying ethics to IT-based applications and devices that can act
as agents. The aim of machine ethics is to give machines an ethical dimension so that they
behave appropriately towards human beings and other machines. This literature review briefly
examines the concept of machine ethics as it applies to the creation of artificial agents. These
agents receive input from their environment and interpret it according to programmed actions
and values in order to deliver a result or response. Artificial agents come in various types;
the simplest is the simple reflex agent used in robotics.
Types of machine ethics
According to Michael Anderson’s research, ethical machines are of two types: explicit and
implicit. An explicit ethical agent reacts ethically after weighing and analyzing factors from
its external environment, whereas an implicit ethical agent has simply been programmed to
behave ethically. A central issue in today’s artificial intelligence is that machines also need
to be prepared for ethical behavior in unexpected situations. Artificial agents such as robots
need an ethical sense built into their systems so that they can decide ethically in a given
situation. Such an ethical sense works the way a human’s does, making decisions according to
the values instilled in the machine at the time of manufacture. There are challenges in the
creation of ethical artificial agents. The first is to understand the purpose of creating
artificial intelligence; this challenge falls to the creators, who must thoroughly research the
rationale and work towards potential remedies. The second is to understand the philosophy
behind incorporating ethics into artificial agents so that users can gain the maximum benefit
from it (Anderson M. & Anderson S., 2007 [1]). For example, truth-telling machines use sensors
and technical connections to a person’s brain that register a sense of confidence when the
truth is told and a sense of shame and diffidence when lies are spoken. These machines are
convenient for people who want to establish the truth through technical means. Since technical
means are considered more reliable for producing correct results, it is worthwhile to work
further on the ethical construction of such machines and draw the maximum benefit from
artificial intelligence agents. There are a few steps to creating an ethical artificial agent.
Steps for ethical inducement in artificial agents
The first step is to embed prima facie duty theory in machines. According to Ross, the notion
of prima facie duty recognizes that individuals face moral choices on a daily basis in which
duties must be performed or weighed against one another (Ross, 2008), and this weighing is only
possible through intuitive judgement. Dissatisfied with the utilitarian idea that morality can
be reduced to the single question of whether an act maximizes utility, Ross developed a theory
based instead on a plurality of prima facie duties. He classified these duties into categories:
fidelity, reparation, gratitude, justice, beneficence, non-maleficence, and self-improvement.
Prima facie duties relate to machine ethics through their ability to capture the ethical
considerations a machine would require to behave ethically in a particular domain.
These theories rest on the concept of ethics and basic human values. The second step is to
define a domain for the artificial agent in which the theoretical values supplied by the
manufacturer can be realized. The third step is to develop the agent’s decision-making process
on the basis of the ethical theories. Step four is building the required algorithm into the
artificial agent. The fifth step is creating a specific prototype that issues the commands the
agent needs to behave ethically. In the last step, testing is done to analyze the performance
of the embedded ethical theory and verify that the algorithms and commands function as
intended.
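To make the decision-making step concrete, the weighing of prima facie duties can be sketched in a few lines of code. This is only an illustrative sketch: the weights, the duty-satisfaction scale, and the two care-robot actions are invented here and are not taken from Anderson and Anderson’s system.

```python
# Illustrative sketch of step three: choosing an action by weighing Ross's
# prima facie duties. Weights and action profiles are hypothetical.

DUTIES = ["fidelity", "reparation", "gratitude", "justice",
          "beneficence", "non_maleficence", "self_improvement"]

# Illustrative weights; non-maleficence is treated as the strongest duty.
WEIGHTS = {"fidelity": 1.0, "reparation": 1.0, "gratitude": 0.5,
           "justice": 1.5, "beneficence": 1.0,
           "non_maleficence": 2.0, "self_improvement": 0.5}

def score(action_profile):
    """Weighted sum of duty satisfaction levels (each in -2..+2)."""
    return sum(WEIGHTS[d] * action_profile.get(d, 0) for d in DUTIES)

def choose(actions):
    """Pick the (name, profile) pair that best satisfies the weighted duties."""
    return max(actions, key=lambda pair: score(pair[1]))

# Two hypothetical actions for a care robot: remind a patient to take
# medication again, or respect the patient's stated wish to be left alone.
actions = [
    ("remind_again", {"beneficence": 2, "non_maleficence": 1, "fidelity": -1}),
    ("leave_alone",  {"beneficence": -1, "fidelity": 2}),
]
best = choose(actions)
```

In this toy case the reminder wins because the weighted gain in beneficence and non-maleficence outweighs the small loss in fidelity; changing the weights changes the outcome, which is exactly the tuning problem the testing step above must address.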
Banafesh et al. also propose an ethical classifier through which artificial agents learn
ethical decision-making skills and perform according to human norms and values. The research
highlights the role of negotiation in ethical agreements: a bilateral ethical agent learns
ethical norms that enable it to behave ethically. In machines, too, ethical behavior is learned
through the negotiation of certain theories and multiple sensory networks that together
comprise an ethical artificial agent. A genetic algorithm (GA) and a multilayer perceptron
(MLP) are the two methods the researchers propose for predicting whether an agent’s behavior
will be ethical or unethical (Rekabadar, 2012). These algorithms proved effective but demanded
more time and cost from the investor; the negotiation concept therefore also becomes costly for
consumers who want to build an ethical artificial agent.
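As an illustration of the classifier idea, a toy multilayer perceptron can be trained to label behavior descriptions as ethical or unethical. Everything below, the features, the tiny dataset, and the network size, is invented for the sketch and does not reproduce the cited study’s model.

```python
# Toy MLP classifier: label behavior feature vectors as ethical (1) or
# unethical (0). Features, data, and network size are invented for the sketch.
import numpy as np

rng = np.random.default_rng(0)

# Toy behavior features: [harm_caused, consent_obtained, truthful]
X = np.array([[0, 1, 1], [1, 0, 0], [0, 1, 0], [1, 1, 0],
              [0, 0, 1], [1, 0, 1]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)  # 1 = ethical

# One hidden layer of 4 sigmoid units and a sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                     # plain batch gradient descent
    h = sig(X @ W1 + b1)                  # hidden activations
    p = sig(h @ W2 + b2).ravel()          # predicted P(ethical)
    d2 = ((p - y) / len(y))[:, None]      # mean cross-entropy gradient
    d1 = (d2 @ W2.T) * h * (1 - h)        # back-propagated hidden error
    W2 -= h.T @ d2; b2 -= d2.sum(0)
    W1 -= X.T @ d1; b1 -= d1.sum(0)

preds = (sig(sig(X @ W1 + b1) @ W2 + b2).ravel() > 0.5).astype(int)
```

On this deliberately easy dataset the label is determined by the first feature alone, so the network separates the classes quickly; the cost objection raised above comes from the data collection and training effort a realistic version would need, not from the model itself.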
Roborights
The creation of artificial agents requires technical expertise. Making a machine ethical,
however, also requires an understanding of moral values alongside that technical support. The
ethical creation of artificial agents is influenced by “roborights”, a term describing the
ethical practices that should be observed while creating, treating, and using robots. Robots
are the main artificial agents expected to perform ethically in real situations, and in some
situations ethical robots are created to replace the presence of human beings. Roborights are
therefore a mandatory consideration during creation. The first ethical issue in the creation of
robots as artificial agents is the threat to privacy. Robots are programmed to understand
various languages; English in particular is the primary language programmed into robots so
that they can perform different functions. Ordinary human chores such as reading a text
message, responding to email, and answering phone calls all require an understanding of
English. That same understanding, however, poses a threat to human privacy. In exceptional
cases, robots are expected to avoid participating in situations that human beings consider
private. The creator of a robot must be careful about this factor and apply ethics that confine
the robot’s participation in such situations.
Another ethical issue in robotics is the threat to human dignity. This issue states that
artificial agents should not be appointed to positions in which they can affect human dignity
and respect. For example, a robot acting as a customer care representative would respond using
its programmed replies, which may not be suitable in every customer care problem or situation.
Some situations demand a respect for the honor and dignity of human beings that a robot will
not understand. It is a challenge for the creators of artificial agents to make robots capable
of understanding the ethical demands of a situation and showing appropriate respect to
customers. The third ethical issue is the demand for transparency in the creation of artificial
agents. The creator must make the creation effort transparent enough to provide the required
information to buyers and users. It is an ethical obligation of the creator to provide all
necessary details to the user so that the agent can be used for its intended purpose.
Ethical issues in the creation of an ethical agent
There are various moral and ethical issues in the creation of artificial agents. Critics argue
that machines should not be a replacement for human presence, and that the moral and ethical
values set in artificial agents should serve specific human needs only. The ethical issue
concerns the level of understanding at which a human mind can work. The first issue is the
demand for high-grade autonomy and a complex neural system that human beings cannot yet build.
Robots and other artificial agents are functional only up to a point; beyond it, human beings
are needed to perform the duty. The state-diagram concepts underlying robot design would again
need to be redefined. Creating an artificial agent requires technical expertise and heavy
investment (Pana, 2006). Ethics demands that such agents be deployed only as a safety solution
and a helping hand to human beings rather than as a replacement for important people.
Nick Bostrom, a professor at the University of Oxford, claims that artificially intelligent
agents may soon become capable enough to drive human beings to extinction, which is not the
desired outcome of all the effort put into AI. Machine ethics should address this issue and
confine creation to the support and assistance of human beings rather than their replacement.
Machine ethics expects machines to behave ethically towards their users and other machines. A
programmed artificial agent should apply ethics when dealing with users. The databases and
storage of an artificial agent are expected to reveal all results to the user on command; the
need for a transparent data management system in such agents is part of machine ethics policy.
Nowadays, with advances in artificial intelligence, companies have begun empowering machines
rather than humans in decision-making processes. This places a great responsibility on the
creators of machines to build in ethics so that the machines can protect people from losses.
The morals of machines must also be respected by their users: a user must know how to use a
machine appropriately before expecting ethical behavior from it. For example, a robot should
not be ordered to do something beyond its capability while obedience and ethical behavior are
still demanded of it. Artificial intelligence imposes an obligation on creators to design
agents that send the right messages to their actuators. For example, a drone should be
programmed to avoid hitting a house in which there are signals of human presence; if the
creator does not build such commands into the drone’s instructions, people cannot expect
ethical behavior from it. Machine ethics is designed to make machines obey the user’s commands
in all circumstances; under the concept of artificial intelligence, obedience to the user is
considered an ethical obligation for machines. Users expect machines to protect and obey them,
to prevent harmful situations, and to take immediate action. For example, fighter robots are
built to sense danger to their owner and immediately activate their hidden capabilities.
Creations in artificial intelligence often clash with societal ethics and norms; for example,
an artificial agent cannot sense mimicry, and may read only the apparently unethical behavior
of another person and attack him aggressively. An experimental stage in the creation of an
artificial agent is therefore necessary to test the required ethics and amend the basic
structure of the ethical machine.
Challenges to machine ethics
Machine ethics is a challenge for the creators of artificial agents. Embedding ethics in
machines is an attempt to put machines on a par with humans, and the contest between the human
brain’s sensory capability and the technical strength of artificial agent creators is a tough
one because of the demands of programming ethical theory. Artificial moral agency (AMA) is an
effort to incorporate the concepts of ethics and virtue into machines and, through computing
technology, to enable them to become ethical decision makers. Kantian ethics is a concept
integrated into AMA. Kantian ethics is an example of a deontological ethical theory, introduced
by the German philosopher Immanuel Kant. According to this theory, the rightness or wrongness
of actions does not depend on their results but on whether they fulfil our duty. The theory
arose from Enlightenment rationalism and is based on the idea that a good will is the only
thing that is intrinsically good (Kant, 2004). The ultimate principle behind this approach is
duty to the moral law. The categorical imperative is central to the construction of the moral
law and applies to all individuals regardless of their desires or interests. Kant also made a
distinction between perfect and imperfect duties. Kantian ethics relates to machine ethics in
that it appears promising and could assist machine decision making, owing to the computational
structure of its judgements.
The Kantian approach requires machines to understand ethical norms. According to this view, a
machine cannot be under constant monitoring and supervision; to be useful and profitable for
human beings, it needs to be capable of ethical decision making on its own. Kantian theory
demands that the machine’s programmer use computing techniques to teach the machine to
differentiate between ethical and unethical courses of action. The machine ethics maxim (MEM)
is the foremost technique used for this purpose: the maxim sets standard norms and values of
moral agency in a machine and tests the machine’s ability to judge a situation ethically. For
example, to test for lying, the moral agent would prompt the machine to respond with denial and
rejection and to demand truthfulness. Unfortunately, the maxim failed in its purpose. There are
numerous situations in which the human mind can act in fabricated ways that dodge a machine,
leaving the machine unable to act according to its programmed norms and ethics before the human
reacts. Thus, artificial moral agency and the machine ethics maxim have not yet produced any
noticeable results in creating an ethical artificial agent (Tonkens, 2007).
Technical insight into the creation of artificial agents
The technical side of creating an ethical machine is complex and difficult for creators, who
have to examine, analyze, and test each part of the machine before finalizing its design and
internal system. Designing an artificial agent takes intense research and development, and
creating an ethical one requires additional work on sensory development and on the ethical
understanding the machine must capture. Machine ethics is a revolutionary idea in the field of
artificial intelligence and needs thorough effort and research. However, building ethics into
machines and artificial agents is meant only to support the positive use of those agents; it is
not meant to eliminate the need for human presence or to replace it in any way.
How the ideas in this paper by Anderson and Anderson can be applied to practical
questions of ethics involving medical machines:
1. Should medical machines be programmed to follow a code of medical
ethics?
2. Should machines share responsibility with humans for the ethical
consequences of medical actions?
Designing medical machines so that they follow ethics is a potent practical challenge. Ethical
rationality is the work of a superior human mind: the machine can act ‘explicitly’, whereas the
ethical rationale itself is an implicit component. It is argued that there could be serious
consequences if machines are designed ethically. According to research from China, there is
already an intelligent humanoid robot that can understand ‘emotions’, which are part of ethics,
and in South Korea there is research on expanding the use of robots for household errands.
Another view is that machines acting in the name of ethics could pose a future threat in which
human life is subjugated, as depicted in the film The Matrix. The first part of the argument
concerns dilemmas within ethical theory, which suggest that machines can never interpret
philosophical realms. Medical machines need to be programmed by an ethical individual rather
than being built ethically in themselves; that is the idea behind the explicit and implicit
natures. Hence there needs to be an ethical agent present so that the set rules of ethics are
followed. If human beings can drift away from ethical morals, so can machines. That is why one
author holds that machines can follow ‘standardized’ ethical rules that are not subject to
variability: since humans have grounded animalistic qualities that motivate them to seek
sustenance and survival, machines, being devoid of such a background, can perform without
competition and hence without the threat and fear of it (Peter, 2013).
Moreover, it is hard to calculate the feasibility of converting and computing ethics in
discrete form. The discussion then shifts to the notion that the act of greatest ethical
stature is the one that produces the greatest net pleasure. According to the theory of
utilitarianism, machines can act ethically because pursuing ‘good’ results will show no
degradation, since machines are less likely to make an error. Moreover, there would be no
discrimination, whereas humans are more likely to favor their near kin, so a greater quality of
care could be shared. Human beings can also consider only one aspect at a time, which can
reduce the effectiveness of a single action. By these measures, machines could outperform
humans.
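The utilitarian calculation described above can be sketched directly, since it reduces to summing pleasure minus pain over everyone affected. The actions and utility numbers below are hypothetical, chosen only to show the mechanics.

```python
# An illustrative act-utilitarian chooser: pick the action with the greatest
# net pleasure (pleasure minus pain, summed over all affected parties).
# The triage actions and per-person utilities are invented for the sketch.

def net_utility(effects):
    """Sum of (pleasure - pain) across all affected parties."""
    return sum(pleasure - pain for pleasure, pain in effects.values())

def best_action(options):
    """Return the action name with the greatest net utility."""
    return max(options, key=lambda name: net_utility(options[name]))

# Hypothetical triage choice: each entry maps a party to (pleasure, pain).
options = {
    "treat_patient_a": {"patient_a": (8, 1), "patient_b": (0, 3)},
    "treat_patient_b": {"patient_a": (0, 2), "patient_b": (6, 1)},
    "treat_neither":   {"patient_a": (0, 4), "patient_b": (0, 4)},
}
choice = best_action(options)
```

The sketch also exposes the standard objection raised in the following paragraph: the arithmetic is trivial, but assigning the pleasure and pain numbers in the first place is exactly the evaluative step that requires human judgement.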
However, there is a more complex issue concerning deeper emotions and empathy. Machines lack
this emotional quotient; yet emotions are also what prompt humans to take immoral steps in
their lives, which can itself carry ethical consequences. It must also be judged whether there
is a single correct action for a given situation, in which case ethical relativism is negated.
Otherwise, different nations and cultures would each have their own ethically designed machines
and would have to keep revamping them with every generational shift, which seems costly and
impractical given the hindrance of emotional sense. Moreover, in a depressed economy, investing
in machines is not an appropriate way to use scarce health resources such as nurses, care
sources, and caregivers; there is also an economic criticism of handling costs solely with
machine intelligence.

According to medical ethics, there must be respect for autonomy, benevolence, and beneficence.
If a treatment requiring machine intervention is suggested to a patient and the patient refuses
that option, the human consultant must either agree with the patient or try to convince the
patient that it is good for him. This is a healthcare dilemma involving autonomy: either the
human consultant accepts the refusal or convinces the patient and programs the machine
accordingly. It can therefore be suggested that machines and human consultants work together.
MedEthEx is one system that has helped solve such medical problems: it records ethical
questions, converts them into case profiles, and dispatches those profiles for system
validation, which then expects a justification. That justification should be handled
exclusively by the human consultant. The machine thus requires the attention of a doctor or
healthcare professional to feed in the ethical questions, covering the exact time of
medication, the exact ampoule, the number of hours the medication should remain in effect, and
the like. The system then judges whether any duty has been violated or any medical duty left
unfulfilled; the resulting range of duty satisfaction is displayed and an action is suggested.
When the patient does not accept the decision, the machine sometimes notifies the overseer or
doctor and sometimes does not, and the doctor is informed accordingly. This is the best example
of a machine acting as an explicit ethical agent while involving the implicit factors of the
doctor’s point of view.

It is therefore very important to test new-generation technology that is wearable,
voice-oriented, and action-based, with interfaces that resemble human features and unique
gestures along with motion-understanding abilities. If machines have ethical notions embedded
with Emotient and new-generation technology such as perceptual computing and imaging, medical
processes could improve. The new range of thermal-touch technology has produced a new formula
for involving doctors and health staff with machines; this concept is based on augmented
reality, which lets thermal-touch technologies interact with and build on human interaction.
Recent inventions also include telemedicine and telehealth facilities. However, such machines
can only be used with due regard for, and alongside, professionals. The presence of such
healthcare systems cannot be denied: there have been cases in which statistical systems have
acted as sole safety valves and guaranteed systems, but they should be used only as
decision-support systems and nothing more. After the analysis of ‘Ethics for Artificial
Intelligence’, it can be said that the sensitive aspects of ethics and morality evoke mixed
feelings, since all medical and professional decisions are based on evaluation, which is
precisely what machines cannot do (Michalski, 2013). Hence, the evaluation of machine
performance is itself evaluative and can only be judged by the caregivers. It is not solely
about handling machines, but also about handling human resources with due regard and ethical
stability.
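The MedEthEx-style flow described above, profiling a case by duty satisfaction, applying a decision principle, and referring contested cases to the human overseer, can be sketched roughly as follows. The decision rule and the numeric levels are illustrative assumptions, not the published system’s learned principle.

```python
# Rough sketch of a duty-satisfaction profile for a "patient refuses
# treatment" case, with a toy decision rule. Duties are rated on an
# integer scale from -2 (seriously violated) to +2 (fully satisfied);
# the rule and the example numbers are invented for illustration.

DUTY_RANGE = (-2, 2)

def profile(autonomy, beneficence, non_maleficence):
    """Build a duty-satisfaction profile, checking each level is in range."""
    for level in (autonomy, beneficence, non_maleficence):
        assert DUTY_RANGE[0] <= level <= DUTY_RANGE[1]
    return {"autonomy": autonomy, "beneficence": beneficence,
            "non_maleficence": non_maleficence}

def recommend(case):
    """Accept the refusal unless harm to the patient outweighs autonomy."""
    if case["non_maleficence"] <= -1 and case["autonomy"] < 2:
        return "notify_overseer"   # overriding a refusal needs human review
    return "accept_refusal"

# A fully informed, competent refusal with little risk of harm:
low_risk = profile(autonomy=2, beneficence=-1, non_maleficence=0)
# A refusal that would cause serious harm, made with impaired understanding:
high_risk = profile(autonomy=1, beneficence=-2, non_maleficence=-2)
```

Note that the sketch only routes the contested case to a human; the justification itself stays with the consultant, matching the division of labor between explicit machine agency and the doctor’s implicit ethics described above.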
Argument Analysis of the Anderson and Anderson Paper
Machine ethics is considered a promising field, one that could lead to a better understanding
of ethics. It is also argued that it will lead to further discoveries about the impact of
ethics and the different aspects that constitute it. The article on machine ethics shows great
promise, along with the strides that have already been made in this field. It highlights what
the future holds for this aspect of ethics and what it is bound to bring to the table should
everything go as planned. It is a tricky branch of ethics, especially since it involves
creating something totally new and taking an angle that has never been considered before.
Traditionally, ethics centered on the behaviour of human beings; machine ethics takes a
different turn, seeking to have machines perform their tasks in an ethical manner, or at least
avoid performing them unethically. There is still a long way to go in machine ethics, but the
field holds great promise and is clearly something to look forward to in the future.
Machine ethics is defined in this article as a field focused on adding an ethical dimension to
machines. It is quite distinct from computer ethics, which focuses on an individual’s
interaction with machines and the ethics behind it (Anderson M. & Anderson S., 2007 [2]).
Machine ethics dwells on the machines themselves, independent of human interaction: it asks
that machines perform their intended functions in a manner that is ethical and acceptable. The
machine may be interacting with other machines or with humans, but in the end there is no human
interference in the way it performs its functions; the machine makes ethical decisions on its
own and executes its tasks ethically (Muehlhauser & Helm, 2012).
Advantages of Machine Ethics
As mentioned earlier, there are many potential benefits of machine ethics. For one, it is a new
frontier in the field of ethics: it could lead to further unravelling of different aspects of
ethics and possibly give people a better understanding of them. That understanding would
underpin the future of ethics and its efficiency in the different areas where it is applied.
The world is changing fast, and technology plays a critical role in that change; it has taken
centre stage in the world’s progress and contributes to close to all the activities in which
people engage on a day-to-day basis (Anderson M. & Anderson S., 2007 [2]).
Machine ethics will facilitate artificial intelligence and the role that machines have, and
will have, in people’s day-to-day activities. Reliance on machines is growing by the day, and
getting them to do the right thing in the right way has already proven to be a huge task.
Machine ethics serves to allow these machines to perform their intended functions in a manner
that the average person considers ethical. Adding this dimension to the normal functioning of
machines would be a milestone in both ethics and artificial intelligence. Should it succeed,
ethicists stand to gain a great deal: they could learn from the functioning of these machines
and obtain clearer, more reliable data and results from their use in the field
(Muehlhauser & Helm, 2012).
The Downside to Machine Ethics
Up to now, ethics has been concerned with man’s interaction with others, as well as his
interaction with machines. There has never been an ethical angle on the way machines themselves
conduct their day-to-day functioning. This makes machine ethics a very new field, and one that
so far says more about how all this is rational than about what it would offer to ethics and
add to people’s lives. It is known and appreciated that machines can be programmed to do
anything, which supports the idea of making them perform ethically. In the end, however, they
are still machines: they could fail to perform as intended, with nothing to stop them. This is
a constant fear and a major contributor to people’s reservations about using robots in their
households or even in their cars (Muehlhauser & Helm, 2012).
There is still a long way to go for machine ethics. People have yet to accept that it is
practical and usable in real life. The possibilities are endless, and some of them are
promising. However, human ethics has not yet been exhausted enough for ethicists to move on to
machine ethics. Proponents argue that machine ethics would contribute positively and offer a
better understanding of the ordinary ethics people already know, which itself implies that
human ethics has not been exhausted. Also, even with machine ethics facilitating the
understanding of human ethics, the programming would have to follow what people already
understand ethics to be; it would only be an extension of that understanding. It would bring
nothing new to the table, which raises the question of whether it is important at all
(Anderson M. & Anderson S., 2007 [2]).
Conclusions
Machine ethics has come at the right time, with the world embracing artificial intelligence and
choosing to use it in day-to-day activities. This is a milestone in both technology and ethics.
However, it is important to appreciate that many aspects have yet to be addressed, and there is
still a long way to go in the usage and adoption of machine ethics in the field. It is
achievable, however, and that alone should motivate the parties involved to keep working.
A reaction to Ray Kurzweil’s The Singularity Is Near, and McDermott’s critique of Kurzweil
By Ben Goertzel
Critical Analysis
This paper analyzes the research done in artificial intelligence. Artificial intelligence
refers to the intelligence exhibited by machines or software programs, and AI is defined as the
field of studying and developing machines that are intelligent (Raessens & Goldstein, 2011).
There are numerous researchers in the field. The core emphasis here is a critical analysis of
the article written by Ben Goertzel about artificial intelligence.
The article is about the rising interest of researchers in artificial intelligence. Many AI
researchers have been over-ambitious, and that attitude has led to many failures; however, the
author indicates that the field of AI has huge scope if researchers avoid over-ambition. The
article takes as its basis the book The Singularity Is Near by Ray Kurzweil and analyzes it.
Kurzweil’s book explains one particular scenario for artificial intelligence, in which
human-level intelligence comes about through human brain emulation.
AGI, or Artificial General Intelligence, is considered to have a great future in the 21st
century if and only if it is taken seriously. But AI researchers urgently need to shed their
over-ambitious attitude in this area and adopt a narrower approach to the problem. Though the
future of AGI cannot be predicted, there is a dire need to adopt tools such as scenario
analysis in order to succeed.
The article discusses another scenario, in which human-level intelligence comes through
non-human artificial intelligence operating in a virtual world. The field carries the huge
hopes of AI researchers, and the article discusses these as well; this enthusiasm has failed
time and again, and the professionals have faced disappointment multiple times. The hopes
concern the invention of new concepts and processes in artificial intelligence. The research
carried out in AI has nonetheless advanced with time (Goertzel, 2007), even as the vision
originally developed by AI professionals has faded.
This is due to several factors: the over-optimism of early AI researchers, who promised more
than could be achieved; the rising frequency of their failures; and a deepening understanding
of the computational and conceptual difficulties. Today, research and development carried out
under the label “AI” is done in narrowly defined domains, to ensure that researchers’
over-optimism does not again dampen the field (Goertzel, 2007). Even so, the author states that
over-ambitious goals for artificial intelligence have greatly multiplied, as is evident from
the increasing number of workshops and seminars on AI.
The list is given below.

- Integrated intelligent capabilities: a workshop about integrated intelligence skills.
- Roadmap to human level intelligence: a workshop to discuss the future of human-level intelligence.
- Building and assessing human level intelligence: a seminar to develop and assess human-level intelligence.
- AGI: a workshop discussing different concepts of artificial general intelligence.
The author notes that Kurzweil in his book treats the research done in AI as narrow: the
research focuses on developing software programs that solve specific, narrowly constrained
problems. As the book puts it, a narrow AI program does not require in-depth knowledge of what
it is doing or should be doing, nor does it need to generalize what it has learned
(Goertzel, 2007).
The Emergence of AGI
Kurzweil in his book distinguishes narrow AI from strong AI. The author, however, finds the
latter term confusing, since it clashes with the sense of "strong AI" widely used by Searle. He
therefore uses the term AGI, an abbreviation for Artificial General Intelligence, as the opposite of
Kurzweil's narrow AI. The author argues in the article that AI professionals should base their
studies on AGI, and that many AGI research experiments likely to take place in the near future
will prove highly successful (Goertzel, 2007). He also notes that the future of AGI is a hard and
complicated question, since we cannot say exactly what will happen. One methodology that could
be used to approach the problem is scenario analysis, developed by Shell Oil's planning experts
in the 1970s and since expanded into a general methodology (Karnal and Kumar, 1998). Scenario
analysis is widely used to cope with critical real-world situations.
The Concept of Scenario Analysis
In my view, scenario analysis rests on a basic, simple idea: researchers lay out a series of steps
specific to the situation, gather information from the parties involved, and use it to construct a
set of possible scenarios, which can then be assessed in order to choose relevant responses. For
instance, scenario analysis was applied effectively to the South African political situation during
the abolition of apartheid. The steps included organizing meetings among a range of parties, such
as black anti-apartheid activists and the government (Gillies, 1996). The team's goal was to use
the information gathered to come up with different possible scenarios, which were then assessed
in order to select appropriate courses of action.
This is considered an excellent way to confront the issue of Artificial General Intelligence
and its relation to technology. The process would be straightforward: one would convene a
series of meetings involving different parties, such as technology professionals and AGI
researchers, with the key goal of coming up with different scenarios (Goertzel, 2007).
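The steps above can be caricatured as a toy loop, purely illustrative and not from the article: stakeholder input is represented here as hand-assigned plausibility and impact scores (invented numbers), and the candidate scenarios are ranked by a simple product score.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    description: str
    plausibility: float  # stakeholder-assigned estimate, 0..1 (hypothetical)
    impact: float        # stakeholder-assigned estimate, 0..1 (hypothetical)

def assess(scenarios):
    """Rank scenarios by a simple plausibility-times-impact score."""
    return sorted(scenarios,
                  key=lambda s: s.plausibility * s.impact,
                  reverse=True)

# Two candidate futures for AGI, echoing those discussed in the article;
# the scores are invented for illustration only.
candidates = [
    Scenario("steady incremental progress",
             "narrow AI research gradually becomes less narrow", 0.6, 0.9),
    Scenario("dead end",
             "narrow AI yields only domain-specific successes", 0.7, 0.4),
]

for s in assess(candidates):
    print(f"{s.name}: score={s.plausibility * s.impact:.2f}")
```

Real scenario analysis of course replaces the invented scores with extended stakeholder deliberation; the sketch only shows the enumerate-then-assess structure.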
The author recommends two plausible scenarios regarding the future of AGI: a steady
incremental progress scenario, in which narrow AI research continues to advance and gradually
becomes less and less narrow, and a dead-end scenario, in which narrow AI research leads to
various domain-specific successes but no further. Scenario analysis thus has considerable scope
in the near future of AGI (Goertzel, 2007). A journal published in 2006 carried a review by
Drew McDermott of Kurzweil's claims, focusing on The Singularity Is Near. McDermott's point
was that Kurzweil offered no proof for his statement that an AI-driven singularity is upon us;
Kurzweil provides only its likelihood. The author counters, however, that Kurzweil did not
undervalue the uncertainty involved in predicting the future of open systems such as human
societies; the human mind's tendency toward overconfident prediction has been well researched
and documented in cognitive psychology.
The Concept of Virtual Embodiment
In the next section of the article, the author presents a different view of developing and
teaching AGI from the one Kurzweil offers in his book: an approach based on virtual
embodiment. The debate over whether embodiment is necessary for AI is a long-standing one;
many AI researchers fall firmly on one side or the other, but the author places himself
somewhere in the middle (Goertzel, 2007). He regards embodiment as very useful but not
strictly necessary, and suggests that AI researchers should spend most of their time working
with virtual embodiments in digital simulations rather than with robots, which can be highly
beneficial for their research. The author's own work has likewise involved the relation between
artificial intelligence learning systems and virtual agents.
The idea of virtual embodiment is not new; it can be traced back to Winograd's classic
SHRDLU system. But with the evolution of technology, there is still a long way to go. The
author plans a project over the next few years involving a virtual talking parrot. He intends to
deploy millions of such parrots across different online worlds, communicating in simple
English. Each parrot will have its own memories, but all will share one thing: a common
knowledge of English.
Besides this, the author points to the possibility of grounding the disambiguation of linguistic
constructs. For the virtual parrot, the test of whether it has used language correctly will be
whether it gets what it wanted from its human friends. For instance, if the parrot asks for food,
it is likely to get food; and because the parrot is programmed to want food, it will be motivated
to speak English correctly. Linguistic construction is therefore an important component of the
field of AGI, and one of considerable weight.
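The reward loop the author envisions for the parrot can be sketched in a few lines. This is a hypothetical toy, not anything from the article: utterances that get the parrot what it wants are reinforced, so the correct English request comes to dominate.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

class VirtualParrot:
    """Toy agent: tries utterances; successful ones are reinforced."""

    def __init__(self, utterances):
        # Each candidate utterance starts with equal weight.
        self.weights = {u: 1.0 for u in utterances}

    def speak(self):
        # Pick an utterance with probability proportional to its weight.
        utterances = list(self.weights)
        probs = [self.weights[u] for u in utterances]
        return random.choices(utterances, weights=probs)[0]

    def reinforce(self, utterance, got_food):
        # Strengthen utterances that achieved the parrot's goal,
        # slowly weaken those that did not.
        self.weights[utterance] *= 2.0 if got_food else 0.9

def human_friend(utterance):
    # The human gives food only in response to correct English.
    return utterance == "may I have food please"

parrot = VirtualParrot(["food gimme", "may I have food please", "squawk"])
for _ in range(200):
    u = parrot.speak()
    parrot.reinforce(u, human_friend(u))

# After training, the reinforced request dominates the weights.
best = max(parrot.weights, key=parrot.weights.get)
print(best)
```

The design point mirrors the article's argument: correctness is judged not by an internal grammar check but by whether the utterance gets the parrot what it wants.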
The author does not, however, guarantee that such virtual parrots are possible; he is
confident only that they may or may not come about, depending on the science and technology
available. What is required here is to be highly hopeful without being over-optimistic. AGI has
miles to go and may be achieved through one or more plausible paths. The field needs to be
taken seriously by artificial intelligence researchers, who should adopt specific approaches so
as to be confident of success.
It can therefore be concluded that developing AGI is a huge and difficult goal, but one that is
achievable if taken seriously. Many AI researchers have given sound arguments that AGI is
achievable within our lifetime through different plausible paths, and it could lead to many
remarkable things in the future.
How can the ideas in this paper by Ben Goertzel be applied to practical questions of
ethics involving medical machines?
1. Should medical machines be programmed to follow a code of medical ethics?
2. Should machines share responsibility with humans for the ethical consequences of
medical actions?
Artificial intelligence is one of the major fields of today and shows a promising future.
However, alongside the field's various advancements there are serious issues that cannot be
neglected. For example, ethical issues will always arise in situations where machines are
involved and are allowed to take necessary actions without any human supervision (Poole &
Mackworth, 2010).
To set the context of this paper, it is important to recognize that one of the major areas
where ethical issues may arise with respect to machines is the health care setting. Health care
is a sensitive area where every action counts and detailed analysis should precede any action.
With such intelligent machines, it is very hard to predict whether the machine will perform in
the required manner.
On the other hand, the field of Artificial Intelligence (AI) aims to ensure that the machines
now being developed are capable of handling any kind of situation, whether an ethical one or
one where a rational decision must be made. To show how different practical questions about
medical ethics can be tackled, a scientific paper published in 2007 is reviewed here. The ideas
presented in that paper are then analysed to judge whether they can appropriately be applied
to the practical questions of ethics involving medical machines. Lastly, a short conclusion at
the end of this paper presents a final verdict on whether the ideas in the article can be applied.
First of all, the major practical questions that concern everyone are these: should medical
machines be programmed to follow a code of medical ethics, and should machines that assist
surgeons and health care professionals share responsibility with humans for the ethical
consequences of medical actions?
Idea of scenario analysis
The author of the article does not consider Artificial Intelligence (AI) an effective term;
he prefers AGI, Artificial General Intelligence (Goertzel, 2007). With respect to the idea of
scenario analysis, it does not seem an appropriate method for controlling and programming a
medical machine. The rationale is that scenario analysis, as described in the article, may be
helpful for simple tasks rather than complex ones, and predictions built on specific abilities
may turn out badly where specialty and expertise are required. It is therefore safer not to give
a medical machine capabilities broad enough to create problems; in this way, a decision can
still be altered at the end point, preventing the machine from taking any harmful decision or
action.
Idea of the coherent extrapolated volition scenario
Another idea presented by the author is the coherent extrapolated volition scenario. The
idea is that predicting the future, and making decisions for it, is very difficult. For this reason,
a specialized narrow Artificial Intelligence (AI) could be produced to achieve particular goals,
and through this narrow AI the actual needs of humans or subjects might be identified. The
author also indicates that although the list of unpredictable behaviours may be long, the
indeterminacy cannot be neglected, and it may give rise to a technological singularity. Under
a technological singularity, no matter what kind of superintelligent machine is designed or
developed, it is very difficult to predict what such a machine's next step might be (Goertzel,
2007). Corresponding to this scenario of coherent extrapolated volition, it is important to note
that the singularity would enable machines to behave in unpredictable ways (Goertzel, 2007).
As for whether medical machines should follow a medical code of ethics and be held
responsible for any mishap, the answer here is yes. When machines become that intelligent,
anything should be expected from them. Moreover, when machines gain human-level
intelligence they can take any action a human can take, and for that reason, if something goes
wrong through their negligence, the medical machines should also share the blame.
Idea of AI brain filled with linguistic knowledge
This interactive idea, presented in the article, is that AI brains can be filled with linguistic
knowledge. Through continuous interaction with that knowledge, the artificial brain is likely
to absorb its spirit too. An adaptive language learning algorithm can be used here, with whose
help artificially intelligent machines could learn a great many things that would assist them in
supporting physicians in the health care setting and surgeons in the operating theatre.
Moreover, with respect to the practical questions, it should be recognized that even though
medical machines may acquire a great deal of information, this does not mean they cannot
malfunction; an intelligent machine is still a machine. A machine does not become human by
acquiring knowledge, and thus it should be programmed to follow a code of medical ethics.
In this way, a medical machine with human-level intelligence and a medical code of ethics
might be considered capable of rational decisions and reasoning.
On the other hand, with such AI brains a medical machine could also be held responsible
for any issues that arise during an operation, or in the worst case, if the patient dies. This is
because once a medical machine is capable of language as well as all the relevant techniques,
it can be expected to behave in an appropriate manner.
Conclusion
In summary, this paper helps show that no matter how much the field of artificial
intelligence grows, and despite its promising future, very serious issues remain that should be
handled before any such machines are introduced into the medical field. As mentioned earlier,
medicine is a sensitive field; it is therefore hard to argue that medical machines should not be
blamed when something goes wrong, and they should strictly follow a medical code of ethics.
It can also be concluded from the article that Kurzweil's book does a good job of explaining
one particular plausible scenario for artificial intelligence, in which human-level AI comes
through human brain emulation; another scenario discussed is that human-level AI comes
through non-human-brain emulation.
The concept used to describe the singularity is also serious and highly valuable when
considering the practice of AI in future. The author stresses that the pursuit of human-level
AGI should be taken very seriously; at the least, it should be treated as a grand pursuit on a
par with those in other areas of science and engineering. McDermott, in his critique, went so
far as to suggest that Kurzweil should stop writing and carry out no further research in the
field of AI. The author believes McDermott was indeed harsh in many of his statements about
Kurzweil and should not have said so, though the author himself also disagrees with Kurzweil
on a number of points.
It can hence be said that AGI has a great future in the coming decades if taken seriously.
There is a need to set aside the over-enthusiastic attitude in this area and adopt a more
disciplined approach. It is true that the future of AGI cannot be predicted, but there is a dire
need to adopt concepts such as scenario analysis in order to succeed.
Who is responsible if the machine performs a poor surgery?
Human safety during surgical procedures is of paramount importance in the health care
industry. Many deaths occur during and after surgical procedures; medical malpractice
accounts for about 225,000 deaths annually in the United States, a significant number of
which occur in the operating theatre. Some of these deaths are due to surgeons' negligence
and inappropriate prescriptions and procedures. With the recent rapid development of
technology, various machines have been developed to help surgeons carry out safe surgical
procedures. Virtually all machines are properly tested after manufacture and before release
to hospitals; nevertheless, various cases of doctors' negligence have led to deaths. Some
doctors do not receive adequate training before beginning to use these machines, some of
which are complex and require rigorous training from the manufacturers. Various deaths
have been blamed on surgeons' improper use of machines through lack of adequate training
or through negligence (Carayon and Wood, 2010). Surgeons know more about the events and
causes of death during surgery than any other stakeholder, including the designers,
manufacturers, and programmers; once in the theatre, the patient's care is in the surgeon's
hands.
Failure to do proper cleaning
The surgeon has the duty of ensuring that the machines and other devices used in surgery
are sterile. Contaminated machines are likely to introduce nosocomial infections, infections
acquired in hospitals that can easily enter the patient's body during surgery. Some of these
infections can be fatal, causing death a few hours after the operation. Quality control should
be performed on a machine regularly to ensure it is clean and in proper working condition.
According to a report in the New York Times, eight people became infected after open-heart
surgery at WellSpan York Hospital in the United States, and four of them died as a result.
The situation was serious enough that the hospital had to trace and assess patients who had
undergone surgery using the device; as a result, 1,300 patients treated between 1 October
2011 and 24 July 2015 were called back for assessment. According to the Food and Drug
Administration, 32 patients were infected through a device used to heat and cool patients'
blood during open-heart surgery. The bacterium identified is known as nontuberculous
mycobacteria (NTM), commonly found in water and soil. NTM is not harmful to healthy
human beings but can cause serious infections in immunocompromised patients, and this
case was complicated by the fact that the isolated bacteria were resistant to multiple
antibiotics. The machine in question is known as a heater-cooler device; it uses water to
regulate temperature through alternating cycles of heating and cooling. Although the water
does not come into direct contact with the patient, infection remains possible through
contamination of the air from the exhaust vent. From the York Hospital case, it is clear that
the doctors' failure to keep the device hygienic caused the infections during surgery. Dr. Hal
Baker, who is in charge of infection control at the hospital, admitted that the manufacturer
provided a cleaning manual for the machine but the hospital did not follow it, so the blame
cannot rest on the manufacturer (Tavernise, 2015). Had the doctors been diligent about the
cleaning and hygiene of the device, infection during the surgical procedures could have been
prevented.
Using wrong reagents
All machines used in medical services require human operation and general medical
knowledge to work efficiently. Sometimes surgeons use the wrong reagents in a particular
machine. Manufacturers of medical devices always recommend the reagents to be used with
their equipment; during the design and development of surgical machines, they test them with
the best reagents to obtain the best results. Some machines are designed to automatically
detect inappropriate reagents, while others are not. In some cases, surgeons ignore the
manufacturer's advice and go ahead with reagents of their own choice, probably because of
the low cost of second-hand reagents. Most machine designers and programmers recommend
reagents from their own brand for use with their machines. The user should also be careful
while dispensing reagents: correct volumes should be used, since low volumes could lead to
malfunction or breakdown of the machine and hence to a poor surgery.
Using the wrong specimen due to negligence
In other cases, the surgeon's negligence leads to mistakes in medical requirements that
a machine cannot detect. In the case of compatibility, for example, the machine is not aware
of the patient's state before surgery; it is only used to test samples or aspects such as the state
of the organ to be implanted or the blood to be transfused. During surgery, the machine may
administer a blood transfusion to compensate for blood lost in the operation. Doctors should
always feed the device with blood compatible with that of the patient; where the wrong blood
group is fed in, most machines will go ahead and administer the blood, leading to
complications due to incompatibility. The same happens when the wrong organ is supplied
to the device for implantation. On 7 February 2003, a case of doctors transplanting the wrong
organs into a patient was reported at Duke University Hospital. The surgeons implanted
organs from a donor with blood group A into a teenage patient named Jesica Santillan, whose
blood group was O. Individuals with blood group O can only receive organs, tissues, or
transfused blood from donors with blood group O. Organ mismatch leads to rejection of the
organ and in most cases to death; an incompatible blood transfusion triggers a reaction in
which the body attacks antigens from the foreign blood, and in most cases the reaction begins
even before the surgery is over, leading to possible death. Jesica died two weeks after
receiving heart and lung transplants in a single surgery, having suffered brain damage after
an attempted second transplant procedure. A post-analysis of the case showed that the main
cause of the error was failure to check the compatibility of the organs before transplant
(Carayon and Wood, 2010).
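A compatibility guard of the kind that could have caught this error is simple to state in code. The sketch below is illustrative only: it encodes the standard ABO donor rules and deliberately ignores the Rh factor, crossmatching, and every other clinical consideration.

```python
# Standard ABO compatibility: which donor groups each recipient can accept.
COMPATIBLE_DONORS = {
    "O":  {"O"},                   # type O recipients accept only type O
    "A":  {"O", "A"},
    "B":  {"O", "B"},
    "AB": {"O", "A", "B", "AB"},   # universal recipient
}

def abo_compatible(recipient: str, donor: str) -> bool:
    """Return True if the donor ABO group is acceptable for the recipient."""
    return donor in COMPATIBLE_DONORS[recipient]

# The Duke case described above: a type A organ offered to a type O patient
# should have been rejected by such a check before surgery began.
print(abo_compatible("O", "A"))   # False: incompatible
print(abo_compatible("O", "O"))   # True: compatible
```

Had the machine (or the workflow around it) enforced even this minimal check before accepting the donor organs, the mismatch would have been flagged automatically.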
A report by the Institute of Medicine (2006) in the United States showed that wrong
medication is the main cause of complications in surgery. Some machines require pre-loading
of medication to be administered during surgery; among the drugs administered is
anaesthesia, to keep the patient asleep and free of pain for the entire procedure. A medication
error can be fatal, leading to instant death. Wrong dosage is a common form of drug error:
the machine administers the drug type and dosage according to the information fed in by the
user, and if the user feeds in the wrong prescription or dosage, the only person to blame is
the user, since the designers, programmers, and manufacturers built the machine to follow
the instructions it is given. Patients requiring intensive care are at greater risk of medication
danger because of the volume and number of drugs involved. The problem of medication
during surgery is serious enough that the World Health Organization (WHO) conducted a
study in 2009 to assess its extent. After the study, the WHO issued guidelines to be followed
to avoid the mismanagement of medication during surgery, and then conducted a further
study to assess the usefulness of those guidelines in lowering the number of deaths. The study
involved eight hospitals in nations across the globe: India, Jordan, the Philippines, Tanzania,
New Zealand, England, Canada, and the United States. The guidelines set out good practices
for the various steps of the surgical procedure, such as keeping a checklist, signing patients
in, identifying the patient's surgical requirement and surgical site, confirming the patient's
identity at sign-out, and taking into consideration the patient's care and recovery after
surgery. The intervention proved effective: the death rate in surgery dropped from 1.5 percent
to 0.8 percent, and the complication rate dropped from 11 percent to 7 percent. Despite these
results, the effectiveness of the common intervention varied across the hospitals. To
understand the real situation and the effectiveness of the intervention, the WHO concluded
that it was important to understand the actual system and process redesigned during the
surgical intervention; such studies would require experts in systems engineering and human
factors (Carayon and Wood, 2010). From the WHO study, it is evident that most of the errors
leading to surgical deaths arise at the hospital level during surgery, and that medical
personnel's mistakes in using machines contribute to a significant number of deaths.
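Even a machine built to follow whatever prescription the user feeds in could apply a last-line sanity check on dosage. The sketch below is purely hypothetical: the drug name and range are invented, and real limits depend on the drug, the patient's weight, and clinical judgment.

```python
# Hypothetical per-drug safe ranges in mg; invented values for illustration.
SAFE_RANGE_MG = {
    "anesthetic_x": (50, 200),
}

def validate_dose(drug: str, dose_mg: float) -> bool:
    """Return True only if the requested dose lies inside the safe range."""
    if drug not in SAFE_RANGE_MG:
        return False          # unknown drug: refuse rather than guess
    low, high = SAFE_RANGE_MG[drug]
    return low <= dose_mg <= high

print(validate_dose("anesthetic_x", 120))   # True: within range
print(validate_dose("anesthetic_x", 500))   # False: overdose blocked
```

Such a check does not remove the user's responsibility for entering the correct prescription, but it would convert some data-entry errors into refusals instead of administered overdoses.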
Miscommunication
Some risks posed by the misuse of equipment can be attributed to human errors during
the transition from one health specialist to another, for instance from the nurse to the surgeon.
The nurse is responsible for preparing the patient before surgery and for caring for the patient
afterwards; the preparations include setting up the machines to be used in the surgical
procedure. The condition of the machines should be clearly communicated to everyone
involved in the surgery: calibrations, sterility, reagents, and the medication loaded into the
machine should all be reported, to avoid preventable errors during the procedure. Medication,
including anaesthesia, should be handled by the nurse in consultation with the surgeon.
Wrong calibration by the nurse can lead to fatal errors during surgery, hence the importance
of notifying the surgeon of any calibration done on the machine. Another important aspect
of handover communication concerns the machine's sterility: if the nurse has not sterilized
the machine, this must be communicated to the doctor so that the doctor can sterilize it before
commencing the surgery. Failure to sterilize the equipment may lead to infection of the
patient during the surgical process (Carayon and Wood, 2010).
Conclusion
From the above discussion, it is clear that the party to be blamed when a machine performs
a poor surgery is the user of the machine. The users comprise the health specialists involved
in the surgical procedure, including nurses and surgeons, as well as anyone who comes into
direct contact with the machine during surgery. Most designers and programmers of surgical
machines perform adequate pre-testing before handing the machines over to hospitals; most
poor machine surgeries are due to human error at the hospital level, attributable to the users
who receive the machines. It is important for manufacturers to offer adequate training to
hospital staff in how to use and maintain the machines, and to visit hospitals regularly to
assess their working condition. Users should always follow the manufacturer's manual to
avoid mistakes, and where there is something they do not understand, they should consult the
manufacturer; guesswork should be avoided where human life is involved.
Should machines share responsibility with humans for the ethical consequences of
medical actions?
In most health care facilities, machines are routinely used near sick individuals, sometimes
among the most vulnerable members of the population. Machines in these settings undertake
very significant, interactive medical functions that require knowledge of medical codes,
human dignity, human privacy, and emotional sensitivity. These requirements apply
especially to patients in fragile states of health, those with various kinds of cognitive or
physical disabilities, and the old or young affected by chronic illness.
As these technological advancements become more widespread, ethical concerns emerge,
such as whether it is morally right to program machines to follow a particular code of medical
ethics, or to embed theories that constrain the conduct of medical machines.
The Realism of Medical Machine Ethics
Technological advancement has led to the introduction of love-bots, driverless
locomotives, robo-docs, and even robo-cops. Today, thousands of automatic mechanisms in
the form of robots are already used in the provision of medical services; they are involved in
diagnosing illnesses, monitoring patients' health status, and providing surgical services.
There is much speculation that the robotics production sector will experience significant
growth over the next five years. However, concerns remain about the responsibility these
machines share with humans for the ethical implications of providing medical services, fears
propelled by uncertainty over whether the machines can be accurate when operating without
the assistance of any human specialist.
The most significant application of these machines is in robot-assisted surgery and therapy.
According to the International Federation of Robotics (IFR), the total sales value of medical
robots is about two million dollars, and machines involved in the provision of medical
services are the most highly valued service robots, with an average price of approximately
one million dollars per unit, inclusive of services and accessories. The utilization of machines
in medicine has been a reality for several decades. Even though sales of machines for medical
services decreased by five percent in 2014 compared with the previous year, the outlook for
this market is growth in the coming years; the sector is expected to reach over $11.4 billion
by 2020, according to a report from the publisher MarketsandMarkets.
Factors driving the demand for machines in the medical field include the need for efficient,
precise, and minimally invasive surgical operations. Growing demand for this kind of surgery,
as a consequence of rising disease incidence, is likely to expand the market. This trend does
not worry specialists, who favour the use of machines in sensitive areas such as medicine;
they believe machines are necessary and even beneficial, since they contribute to more
precise surgery and reduce patients' recovery time. Achim Schweikard, an author of "Medical
Robotics", affirms that the high accuracy of operations performed by medical robots
improves patients' outcomes in terms of comfort and recovery periods.
Types of machines involved in the provision of medical services
There are currently about four kinds of machines in the field of medicine. The first is the
navigation robot, which moves surgical instruments around so that a surgeon can access his
or her tools quickly. Another type is the robot engineered for motion replication; these
automated machines replicate the motions of the surgeon's hand through a passive robot-based
interface. Imaging robots are also used in medical services: an imaging instrument mounted
on a robotic-arm platform produces two-dimensional or three-dimensional images. Finally,
there are rehabilitation robots, which care for patients with long-term illnesses and are fitted
with mechatronic instruments that enhance the recovery of individuals suffering from stroke.
Projects to develop robots in the form of wheelchairs and prostheses are in progress; such
machines are designed so that human brains can control them. It is from this perspective that
Dronlife was introduced, a drone that can transport vital human organs more efficiently than
traditional methods. Although robots are revolutionizing some medical practices, experts
warn that users ought to be very careful when these machines are in their medical settings.
Concerns
The introduction of any new method into a sensitive area like medicine faces many risks. In
surgery specifically, the barriers to accommodating new technologies are very high. While
there is an ongoing debate about the merits and risks of using machines in medicine, the
creators of these robots foresee only a hopeful future for this technological advancement.
There is a need to develop medical machines that are safe, since their presence in nursing
facilities continues to grow. For instance, by the middle of the 21st century approximately
twenty-five million people in western Europe will be sixty-five or older, which will increase
the demand for advanced healthcare systems (Peter, 2013). Effective systems for the
provision of appropriate healthcare, it is argued, can only be achieved by the introduction of
medical robots.
The behavior of such machines ought to be constrained by humans within the scope of human
ethical values. If it is not, the machines can become dangerous when their conduct turns
unpredictable, presenting a potential cause of harm to patients. A medical machine that is not
programmed on how to conduct itself during an emergency can make a patient's condition
worse. To avoid such scenarios, it is prudent that creators of medical machines build in basic
ethical virtues applicable to all situations that may occur within a medical setting.
Another concern is that science fiction has long amplified public fear. The development of
ethically constrained medical machines may lead a community to embrace research in
artificial intelligence, whereas medical machines into which no ethical constraints have been
incorporated seem too risky to be accepted by society.
A further fear concerns the ever-growing question of who bears responsibility for the moral
consequences of a particular medical action (Peter, 2013). Humans rely on intuition when
making moral decisions, and this is a very potent heuristic. At times, however, humans are
poor at making impartial decisions because of their biases. Medical machines, by contrast,
can ascertain the best decision methodically, since their actions are based on the moral
principles and systems programmed into them; in this respect machines can be more
consistent than humans.
What happens in the field is not always what was programmed into these medical machines,
since some learn from their users' experiences. If such an event takes place, it is difficult to
charge the machine's designer or its trainer, since there is no legal framework establishing
who is responsible. People who could bring medical benefits to patients and the community
at large through the development of medical machines may be scared away by this legal
uncertainty. This highlights the problem that no legal frameworks have been put in place to
define who, between machines and humans, bears responsibility for the ethical consequences
of the medical actions they take.
Case study
In 2007, a machine performing a surgical operation for prostate cancer broke a component in
the course of the procedure. It caused a severe fracture in the patient, to the extent that the
urologist had to incise the area, enlarging the wound, in an attempt to remove the broken
robot arm from the patient's vital organs. From this case a question arises: who should be
held responsible for the medical action that led to the accident? It is a challenge to decide
whether to blame the surgeon who used the machine or its manufacturer (Dvorsky, 2013).
Analysis
Medical robotics is a technological advancement that is increasingly becoming an issue in
society, particularly where the allocation of responsibility is involved. It is not always
obvious whether to blame the caregiver using the machine or its manufacturer when the
machine accidentally hurts a patient or even breaks the law. A recent study has revealed that
surgical procedures involving machines have no real advantage over surgeons who have
proper training and instruments. From this point of view, autonomous systems for the
provision of health care services are not yet blamed and held responsible for the medical
actions they take.
The companies that manufacture these machines already have lawsuits filed against them.
For instance, Intuitive Surgical, a firm based in Sunnyvale, California, has faced about ten
product-liability lawsuits over the past few months. The company claims, however, that it is
yet to ascertain the validity of allegations that its medical machines can puncture the spleen
and liver when performing cardiac surgeries (Dvorsky, 2013). Other lawsuits filed against it
allege that the robots may damage the rectum when carrying out a surgical prostate operation
and cause vaginal hernia after performing a hysterectomy. There are also concerns that the
machines inflict unintended burns on patients when cauterizing to block excessive bleeding.
Intuitive Surgical maintains that more than two thousand five hundred of its machines
developed for the provision of medical services are at work in hospitals globally (Dvorsky,
2013).
It is not immediately demonstrable that the manufacturers of these medical machines bear all
the responsibility and blame that can arise from the medical actions the machines take,
whether autonomously or as directed by their operators. These corporations only provide a
tool that remains under the supervision and control of human beings. For instance, surgeons
are the ones who control the four arms of the robot from a panel fitted with a stereoscopic
three-dimensional view of what takes place during a surgical operation. This means that the
legal issues that firms such as Intuitive Surgical may face are likely to come down to
improper usage by surgeons rather than failure of the medical machines.
It is apparent that a learning curve is developing around modern technologies engineered for
the provision of medical services, particularly surgery (Dvorsky, 2013). This trend exposes
users, especially surgeons, to an increased complication rate, since these machines undergo
rapid technological evolution and become more and more sophisticated. The situation is
further aggravated by the poor training that surgeons receive, which sometimes lasts only
two days.
The increased reliance on artificial intelligence in medicine challenges the idea that humans
are the sole entities to which responsibility for medical actions can be ascribed. On this view,
medical machines are moral agents so long as they have a high degree of autonomy and
minimal direct control from other agents when carrying out their tasks. Such devices perform
medical roles that carry obligations defined by the way they are programmed. Machines that
are autonomous enough can therefore perform operations similar to those of a human nurse
or surgeon, with a full understanding of their purposes and responsibilities in the delivery of
health care services.
Recommendations
Since it is a challenge to determine whether humans or machines should take responsibility
for the consequences of their medical actions, it is prudent to rethink the concept of moral
obligation. A malpractice framework of responsibility ought to be developed and adopted by
all stakeholders involved in the use of autonomous machines in medicine. The model centers
on determining the appropriate party to blame, and to hold responsible, for harmful incidents
that arise from a particular medical action.
The distance between machine manufacturers and the consequences of their machines' usage
can be integrated into the model to affirm that there is no immediate and direct causal link
tying them to a malfunction. This aspect is valid only if developers can show that their
contribution to a machine's malfunction is negligible. In this way, the model allows machine
manufacturers to distance themselves from blame and accountability for the consequences of
the medical actions their machines perform (Noorman, 2012).
Conclusion
Sharing responsibility between machines and humans for the moral consequences of their
medical actions requires factoring in the various ways in which these technologies mediate
human actions. Moral obligation concerns not only the actions of machines or people but
also those actions as shaped by technology.
The utilisation of robots in health settings is a powerful instrument motivating the provision
of better, more trustworthy, and more reliable health care services to patients. Holding
individuals accountable for the risks or harms they cause provides a strong incentive and a
limiting point for the assignment of punishments. However, current organisational and
cultural practices do the opposite. This is primarily caused by the conditions under which
these machines are developed and deployed, their capacities, and popular perceptions about
their nature.
Should medical machines be programmed to follow a code of medical ethics?
Artificial intelligence is a field that may show rapid growth in the near future, yet several
ethical and moral concerns hinder its success. Different scenarios suggest that machines can
perform well in the medical sciences; however, certain aspects of machine behaviour must be
controlled so that ethical and moral issues do not arise (Rysewyk & Pontier, 2015).
Consider a hospital setting where a robotic nurse is asked to look after a patient. The robotic
nurse may be used to remind the patient to take medicine when a dose is due. Since patients
are admitted to hospital for different reasons, there may be factors that cause a patient to react
furiously to being reminded to take medicine (Rysewyk & Pontier, 2015).
The patient may yell at the robotic nurse or even throw something at the robot. In such
scenarios, it is difficult to predict how the robotic nurse will react, since it is just a machine.
This is why such machines are said to have artificial intelligence: through it, robots can plan
their forthcoming actions depending on the scenario they face.
NAO robot
In robotics research conducted in 2005, a NAO robot was programmed to perform simple
tasks such as the one discussed above: reminding a patient to take medicine. According to its
programming, the robot was responsible for taking medicine to the patient when it was due,
and if the medication was not taken for any reason, the robotic nurse was supposed to inform
the patient's doctor about the behaviour (Rysewyk & Pontier, 2015).
With modern advancements in artificial intelligence, such robots may also be programmed to
react to any situation they encounter, such as the one discussed above, where the patient
refuses the medicine and perhaps misbehaves. In such a situation, the robotic nurse cannot
force the patient to take the medicine in any way. This is where the need for medical ethics
in medical machines arises (Rysewyk & Pontier, 2015).
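The remind-then-escalate behaviour described above can be sketched as a small decision function. This is a minimal, hypothetical illustration; the action names and the tolerated delay are assumptions, not details taken from the NAO study.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the reminder-and-escalate behaviour: remind the
# patient when a dose is due, and notify the doctor only once the dose is
# overdue by more than a tolerated delay. The threshold is illustrative.
MAX_DELAY = timedelta(hours=1)  # how long a refusal is tolerated

def next_action(dose_due: datetime, now: datetime, taken: bool) -> str:
    """Decide what the robotic nurse should do about one medication dose."""
    if taken:
        return "log_dose"           # dose taken, nothing further to do
    if now < dose_due:
        return "wait"               # dose not yet due
    if now - dose_due <= MAX_DELAY:
        return "remind_patient"     # gentle reminder, no escalation yet
    return "notify_doctor"          # refusal has persisted: escalate

# Example: ninety minutes after a refused dose, the robot escalates.
due = datetime(2020, 1, 1, 8, 0)
print(next_action(due, due + timedelta(minutes=90), taken=False))  # → notify_doctor
```

The point of the sketch is that escalation is time-based rather than immediate, which mirrors the text's requirement that the robot not force the patient but still inform the doctor when a dose is ultimately missed.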
Algorithms and ethics
Building an ethical robot is one of the thorniest challenges to face (Deng, 2015), let alone
accomplish: the researcher or programmer would need to amend the programming and
several features of such a robot hundreds of times. Different algorithms can be utilized to
check the feasibility of the robot and to determine whether it is capable of performing in a
sensitive medical environment. In the hospital setting discussed above, robots have to act
sensibly and with emotional awareness, so emotional values must be taken into
consideration. In a normal health care setting where human nurses attend to patients, the
needs of patients as well as their emotional well-being are the responsibility of the nurses,
and patients expect a lot from their caretakers. Therefore, a flexible algorithm is needed,
under which the robot can amend its actions depending on the requirements of the patient
(Rysewyk & Pontier, 2015). A rational decision may also be required of the robot: for
example, the robotic nurse has to analyse the situation and weigh the benefits against the
disadvantages of skipping the medicine. If the condition is serious and the medication has to
be taken on time without delay, then the robotic nurse has a duty to inform the doctor right
away and try to convince the patient to take the medicine. However, this may make the
patient even more furious, which again underlines the requirement for a flexible algorithm.
Flexible algorithm plus hierarchical values
Through a flexible algorithm and programming, the robotic nurse will be able to reason about
the situation and identify whether taking the medication at the right moment will benefit the
patient more, or whether skipping it would cause little harm. If, for instance, the disease for
which the patient is admitted cannot harm the patient much, then the robotic nurse can let the
matter go and wait for senior staff to take over. In technical terminology, for such situations
the robot has to be given a set of hierarchical values that help it determine what is important
and what is not. In the near future, artificial intelligence will be capable of performing such
activities and of handling situations where it has to take care of itself. It follows that robots
will need a built-in sense of the various factors that may produce a different outcome if
handled in a particular manner.
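One common way to realize such hierarchical values is to score each candidate action as a weighted sum over ranked duties, in the spirit of duty-weighing approaches to machine ethics. The duties, weights, and effect scores below are invented for illustration only; a real system would derive them from clinicians and ethicists.

```python
# Illustrative hierarchical value set: each duty has a weight reflecting
# its rank, and each action is scored by how well it satisfies every
# duty (effect scores range from -1, violates, to +1, fulfils).
DUTY_WEIGHTS = {
    "prevent_harm": 3.0,      # highest-ranked duty
    "respect_autonomy": 2.0,  # the patient's right to refuse
    "fulfil_task": 1.0,       # lowest-ranked duty
}

def score(action: dict) -> float:
    """Weighted sum of the action's effects across all duties."""
    return sum(DUTY_WEIGHTS[d] * s for d, s in action["effects"].items())

def choose(actions: list) -> str:
    """Pick the action whose duty-weighted score is highest."""
    return max(actions, key=score)["name"]

# A refusal over minor medication: insisting violates autonomy for
# little harm prevented, so the robot defers to human staff instead.
actions = [
    {"name": "insist", "effects": {"prevent_harm": 0.2,
                                   "respect_autonomy": -1.0,
                                   "fulfil_task": 1.0}},
    {"name": "defer_to_staff", "effects": {"prevent_harm": 0.1,
                                           "respect_autonomy": 1.0,
                                           "fulfil_task": -0.2}},
]
print(choose(actions))  # → defer_to_staff
```

Raising the harm-prevention score of "insist" (a serious condition) would flip the choice, which is exactly the flexibility the text calls for: the same value hierarchy yields different actions as the situation changes.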
A sense of justice should be present in robots even when they are programmed for mundane
tasks, as a slight mistake or instance of mishandling may result in serious issues. A good
example is a robotic nurse asked to change the channel of a TV watched by several patients.
Although the robotic nurse has to perform the action it is asked to, in doing so there are
factors it has to consider: Which patient wanted the channel changed? Was that patient
watching the TV before? Is it correct to change the channel merely upon the request of a
single patient? How often has each patient's request to change the channel everyone was
watching been fulfilled? In analysing the situation and working out its solution, the robotic
nurse requires medical-machine ethics (Rysewyk & Pontier, 2015).
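The fairness question in the TV example, how often each patient's request has already been granted, can be made concrete with a simple tally. This is a hypothetical sketch of one possible policy, not a prescribed solution: grant a request only if the requester is not already the most-served patient among those watching.

```python
from collections import Counter

# Tally of how many channel-change requests each patient has had granted.
granted = Counter()

def should_grant(requester: str, watchers: list) -> bool:
    """Grant a change only if the requester has been served no more often
    than the least-served patient currently watching the TV."""
    if not watchers:
        return True
    least_served = min(watchers, key=lambda p: granted[p])
    return granted[requester] <= granted[least_served]

def grant(requester: str) -> None:
    """Record that the requester's channel change was carried out."""
    granted[requester] += 1
```

Under this policy a patient whose requests keep being granted is eventually refused in favour of others, giving the robot a rudimentary, auditable notion of fairness.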
Analysis and Evaluation
The word robot stirs up many thoughts. Some will think of a metallic body, others of an
industrial arm, and many will start thinking of the jobs that were replaced by heavy
machinery. Since the first machine was invented, machines have made our lives easier by
replacing the effort and manpower we put into getting things done, and doing so with greater
precision and efficiency. Medical robotics is not a new field, since there are a number of
mundane tasks to be performed in a hospital. Simple tasks were easily handed over to robots,
but now, with the exponential development of robotics and its growing rate of success, it has
carved its way into surgery. Robotic applications that carried out activities such as running
medical tests, performing routine functions like giving medicine to patients, or helping in
rehabilitation by moving and positioning bed-ridden patients were accepted and appreciated
by patients as well as staff. But now that robots have advanced into surgery, patients are
quite apprehensive. To find out and analyze these apprehensions, I conducted a survey,
which the following section analyses and evaluates.
A survey was administered to a sample of 65 people.
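The percentage breakdowns reported for the figures below follow from a simple calculation: each answer's count divided by the sample size of 65, rounded to a whole percent. The tallies in this sketch are hypothetical; only the formula is taken from the analysis.

```python
# Percentage breakdown of Likert-scale answers for a 65-person sample.
SAMPLE_SIZE = 65

def breakdown(counts: dict) -> dict:
    """Map each answer to its rounded percentage of respondents."""
    return {answer: round(100 * n / SAMPLE_SIZE) for answer, n in counts.items()}

# Hypothetical tallies for one 5-point question.
print(breakdown({5: 22, 4: 20, 3: 12, 2: 7, 1: 4}))
# → {5: 34, 4: 31, 3: 18, 2: 11, 1: 6}
```

Note that rounding each share independently can make the percentages sum to slightly more or less than 100, which is worth keeping in mind when reading the figures.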
Figure 1
From figure 1, almost 34% of the people strongly agreed that a machine can perform diverse
actions with precision, 31% chose the next-highest score, and the percentages decreased with
the lower scores. This shows that people are confident about the precision a robot can offer:
it only knows the correct way of doing things, it neither tires nor forgets, it has a very stable
hand, and it works with full concentration. Robots have shortened the waiting time for
laboratory test results, so most people do think robots are precise and accurate across a wide
range of activities.
Figure 2
From figure 2, people had mixed reactions to the question of whether machines are now
capable of telling right from wrong: 42% said yes, 35% said no, and 23% were unsure.
These percentages reveal that a large share of people do put their trust in robots when it
comes to artificial intelligence, whereas a considerable share remains apprehensive about
giving robots complete autonomy. Yet another group is undecided about whether to trust
these machines.
Figure 3
From figure 3, when it comes to having robotic assistance in a health care setting, responses
were mixed: 11% gave it the maximum score of 5 points, 15% gave it 4 points, and 40%,
26% and 11% gave it 3, 2 and 1 point respectively. These responses show that people do not
yet completely trust robots. The larger share leans towards a yes, even though the strength of
that yes varies, but the 37% who do not seem to have a soft spot for these inventions cannot
be neglected and should be catered to.
Figure 4
From figure 4, the issue of ethics pulls many sensitive strings. When asked whether robots
should be programmed with the medical code of ethics, people had very clear responses:
82% answered yes, 8% answered no, and 11% were unsure. Ethics is what distinguishes
humans from the other organisms on the planet, and if humans are inventing machines to
assist or replace them in certain tasks, then first and foremost they must incorporate their
code of ethics into those entities.
Figure 5
From figure 5, when asked whether, if the ethics part were taken care of, they would trust a
robot to conduct a surgery, 21%, 28%, 29%, 14% and 8% of the people gave scores of 5, 4,
3, 2 and 1 point respectively. This shows that a considerable share of people would still think
twice before undergoing robotic surgery. If a machine has to consult its code before making
a decision, what happens when it meets a dead end? These are the questions that give birth to
doubt. Nevertheless, the larger slice that does trust robotic surgery can be considered an
achievement, and it is proof enough that robotics is making its mark and gaining the trust of
the masses over time.
Figure 6
From figure 6, the results show that most people are not comfortable with robotic surgery;
they do not have complete faith in a robot where surgery is concerned. One reason is that it is
not yet a common practice: it is not every day that we meet a person who went through a
robotic surgery and is still up and about. The field is in its embryonic stage, and with time it
will grow into a reliable means of surgery.
Figure 7
From figure 7, in case a surgery goes wrong, i.e. not as expected or planned by the surgeon,
people are most likely to blame the robot rather than other factors. They think the robot
should be held responsible for the failure of the operation; but then again, those who think
otherwise also have an opinion that counts and cannot be neglected.
These results show the faith of the general public in robotics and an acknowledgment of its
services in the health sector. Without robots, the health industry's efficiency and
effectiveness would deteriorate sharply. Even though people understand the usefulness of
robotic surgery, they are reluctant to opt for one, and the reason is a lack of awareness. They
need to be properly guided and educated: it is the brain of the surgeon combined with the
accuracy, precision and stability of a robotic arm, which would not shake under the most
stressful situations. The incisions made with a robot are smaller than those a surgeon makes
by hand, there is less tissue damage, and therefore healing is easier and quicker. If these facts
are properly communicated to the masses, it will only be a matter of time before people
voluntarily opt for robotic surgeries, which are making it possible to perform ever more
intricate operations more effectively.
References
Anderson, M. and Anderson, S. L. (2007) [1]. Machine Ethics: Creating an Ethical Intelligent
Agent. American Association for Artificial Intelligence, 28(4), pp. 15-25.
Ross, W. (2008). The Basis of Objective Judgments in Ethics. Ethics, 37(2), p. 113.
Pana, L. (2006). Artificial Intelligence and Moral Intelligence. tripleC, 4(2), pp. 254-264.
Kant, I. (2004). Book Review: The Philosophy of Immanuel Kant. Immanuel Kant, A. D.
Lindsay. Ethics, 24(4), p. 475.
Rekabadar (2012). Artificial neural network ensemble approach for creating a negotiation
model for ethical artificial agents. ISAISC, 2, pp. 493-501.
Tonkens, R. S. (2007). Ethical Implementation: A Challenge for Machine Ethics. Toronto:
New York Univ.
Peter, S. (2013). Artificial Intelligence. 2nd ed. London: Springer.
Michalski (2013). Machine Learning: An Artificial Intelligence Approach. 1st ed. London:
Springer Science and Business.
Anderson, M. and Anderson, S. L. (2007) [2]. Machine Ethics: Creating an Ethical Intelligent
Agent. American Association for Artificial Intelligence, 28(4), p. 15.
Muehlhauser, L. and Helm, L. (2012). The singularity and machine ethics. In Singularity
Hypotheses (pp. 101-126). Springer Berlin Heidelberg.
Gillies, D. (1996). Artificial Intelligence and Scientific Method.
Goertzel, B. (2007). Human-level artificial general intelligence and the possibility of a
technological singularity: A reaction to Ray Kurzweil's The Singularity Is Near, and
McDermott's critique of Kurzweil. Artificial Intelligence, 171, pp. 1161-1173.
Poole, D. and Mackworth, A. (2010). Artificial Intelligence: Foundations of Computational
Agents. s.l.: Pearson.
Kanal, L. and Kumar, V. (1998). Search in Artificial Intelligence. Springer Science &
Business Media.
Raessens, J. and Goldstein, J. (2011). Handbook of Computer Game Studies. The MIT Press.
Carayon, P. and Wood, K. E. (2010). Patient Safety: The Role of Human Factors and
Systems Engineering. Studies in Health Technology and Informatics, 153, pp. 23-46.
Tavernise, S. (2015). 4 Dead After Being Infected by a Device in Surgery at a Pennsylvania
Hospital. The New York Times, October 26, 2015. Available at:
http://www.nytimes.com/2015/10/27/science/4-dead-after-being-infected-by-adevice-in-surgery-at-a-pennsylvania-hospital.html?_r=1 (Accessed on 30th March 2016).