I have so far profited very much from the impartiality of machines and machine-like institutions, such as the long-distance services of libraries and online archives. Siddhartadevi describes that feature as characteristic of computers, even more so of future "sentient" machines. But my guess is that the impartiality of such machines will diminish quickly. So far it results from a lack of software and of data about people, which politics is furiously trying to change (NY Books quotes aims towards yottabytes by 2015; here is a new article on England). Here is a related link collection by a lawyer and writer. Very good is the work of "FoeBuD", the think tank of the CCC, and its "Big Brother Awards". As a rule of thumb, one can say that today data from the internet, credit cards etc. cover each individual to roughly the same extent as special police forces in the 1970s covered suspected terrorists. I guess the decisive experience for governments throughout the developed world was the 1989 collapse of the Eastern Bloc, which showed that populations can make sudden phase transitions, and that these escaped even the Stasi's absurdly dense monitoring of the population. The main goal for AI is to enable computers to analyse communication content sufficiently fast and accurately, for which e.g. the gigantic Google book scan was made (according to a remark by one of the Google founders). Once that is solved, prejudice-like behaviour of the internet follows.
Indifference towards this needs an explanation. Perhaps the dream of happy interaction with "sentient" computers results from an experience of self-loss? When the internet turns into a semi-permeable membrane between the individual and the worlds of content and meaning beyond it, doubts about its "friendliness" would be perceived as a personal threat and then inhibited. This makes one wonder which patterns exist for the human-machine/city/civilization relation: I guess that, aside from the "symbiotic" model, there are "nomadic", "hunter-gatherer", and "peasant" ones.