1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Three Laws, and the fourth that followed, are not entirely appropriate as constraints for future robots; rather, it is their basic premise, preventing robots from harming humans, that will help ensure robots' actions are acceptable to the general public.
2. To care for and cure, both physically and morally, the infirm, the needy, and the sick.
3. To support accelerated rational learning and intellectual development.