In 1942, Asimov posited the existence of three laws that would govern robot-human relations:
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. (From Wikipedia)
And all I have to say is: Danger, Will Robinson, danger.
[via Engadget]