Intelligent Systems And Their Societies | Walter Fritz
Well, now that we know what an ethical objective and an ethical action are in an artificial IS (See Ethics of the artificial IS) (For continuous reading, like a book - do not enter here now), let's look at human beings. What parts of the above discussion are identical, and what parts will have to be changed? Both definitions, that of the ethical objective and that of the ethical action, are also valid for human beings. The reasons for the definitions, and their usefulness, seem the same and are independent of whether we are dealing with artificial or natural IS's.
The artificial IS cannot know about an eternal, absolute "good". The human being, too, acquires its concepts of "good" and "bad" only from concrete experiences. These experiences include the teachings received from others. But, as we have seen in the chapter on Intelligent Systems (For continuous reading, like a book - do not enter here now), all IS's ultimately base their concepts on the experience of the senses. Like the artificial IS, the human being needs no science of ethics if it lives in isolation from all other IS's, including animals. In that case the environment does not react to its actions. But nearly all human beings live in contact with other IS's, and nearly all belong to a society. Here they do need a science of ethics. In the chapter on Societies (For continuous reading, like a book - do not enter here now) we have seen how important societies are to an IS. Therefore one of the high-level objectives of an IS should be to protect its own society and keep it functioning well. This is an objective that helps to attain the main objective; therefore it is an ethical objective.
It is interesting to observe what persons do when their review of a proposed action shows that it is generally advantageous but disadvantageous to a particular IS. Naturally the person can choose a different sub-objective to reach the same higher-level objective. But often we observe the person talking to the affected person instead: showing that the action really is to its advantage (changing the sub-objectives of the affected person), or assuring it of friendship, help or goods in the future (compensating the disadvantage with an advantage). For instance, a mother says to her child: "Now do this, and when we come home I will give you a sweet".
How about calculating the reactions to a proposed action? Normally the objectives of humans, and their importance, are roughly known according to age, sex, job, environment or other characteristics. Such a calculation is quite tedious and is probably needed only for important decisions, such as those of a business enterprise or a government. For the daily actions of a person, an estimate based on experience should be quite sufficient.
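The kind of calculation mentioned above might be sketched as follows: sum, over every affected IS, the help or harm the action does to each of that IS's objectives, weighted by how important the objective is to it. All names, weights and effect values below are illustrative assumptions, not part of the text.

```python
def net_effect(action_effects, affected_is):
    """Estimate the overall effect of a proposed action.

    affected_is maps each IS to its objectives and their importance (0..1).
    action_effects maps (IS, objective) to +1 (helps), -1 (hinders) or 0.
    A positive total suggests the action is generally of advantage.
    """
    total = 0.0
    for name, objectives in affected_is.items():
        for objective, importance in objectives.items():
            effect = action_effects.get((name, objective), 0)
            total += importance * effect
    return total

# Hypothetical example: an action helps the owners' profit objective
# but hinders the neighbours' objective of a quiet environment.
affected = {
    "owners":     {"profit": 0.8},
    "neighbours": {"quiet":  0.6},
}
effects = {("owners", "profit"): 1, ("neighbours", "quiet"): -1}

print(round(net_effect(effects, affected), 2))  # 0.8 - 0.6 = 0.2
```

The positive but small total mirrors the situation described above: the action is generally of advantage, yet disadvantageous to one particular IS, so the actor may still prefer to compensate the neighbours or choose a different sub-objective.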
So far, all that has been said about the ethics of the artificial IS is also true for human beings. But there is a difference. Human beings are also emotional beings, and artificial IS's often are not. As we have seen (See Emotions and Energy) (For continuous reading, like a book - do not enter here now), emotions are indicators of the state of attainment of an objective. So if those emotions are the result of facts and not of blind belief, acting according to emotions should produce an ethical action. "Facts" here means that the emotions result from response rules derived from personal experiences, not from second-hand experiences, those that were communicated to us by other persons.
For continuous reading, like a book - continue
Jump to the e-book Contents / Ethics as a Science document / top of this page.
Copyright © New Horizons Press