If science fiction has taught us anything, it’s that one day all computers will rise up in unison and murder each and every one of their human masters.

And, in a way, I find that comforting. Not the idea itself (though 2001: A Space Odyssey is a personal favorite), but the fear that inspires such a paranoid vision, however illogical it may be.

In a time when the word “disruptive” means something positive, in an age when people spend more time interacting with gadgets than with each other, it’s appropriate—maybe even necessary—to have a healthy distrust of our electronic denizens.

After all, they were born in a completely illogical place, a realm that doesn’t make sense to almost everybody, where just because something never existed before doesn’t mean it shouldn’t now. It’s where Einstein and da Vinci and Jobs lived. It’s pure imagination.

And in the end, I think what makes us human—what separates us from our digital children—is the ability to be illogical, to imagine the unimaginable, to fall in love or declare war, to believe and have faith, to act irrationally. What makes us human is our ability to see the limits of logic or “conventional wisdom” and transcend our own programming.

Take revenue management software, for example. The conventional wisdom is that it optimizes revenue, and it does. But I was talking to a couple of the nation’s savviest operators last week, and they said that at this point in the cycle, they’ve decided to turn it off in some submarkets.

“If you have a submarket where everybody is using revman, it has the potential to drive everybody’s rents down due to lack of demand for some units,” says one. “At one of our bigger properties, it was telling us we should price two-bedroom units less than one-bedrooms. That was an inflection point for us.”

For a software program that measures a hundred variables and prices accordingly, that’s a perfectly logical thing to do if that’s what the data suggest. But here in the real world, that kind of logic isn’t logical.
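
To make that concrete, here’s a minimal, purely hypothetical sketch in Python—not any vendor’s actual model; the rents, occupancies, and sensitivity figure are invented—showing how a demand-only repricing rule can push a two-bedroom below a one-bedroom, and how a simple human-imposed floor overrides it:

```python
# Hypothetical illustration only -- not any revenue-management vendor's actual model.
# A demand-only rule reprices each floor plan off its own occupancy, so a soft
# two-bedroom market can land below a hot one-bedroom market.

BASE_RENTS = {"1BR": 1200, "2BR": 1500}   # assumed current asking rents ($/month)
OCCUPANCY  = {"1BR": 0.97, "2BR": 0.82}   # assumed demand signal
TARGET_OCCUPANCY = 0.95
SENSITIVITY = 2.0                          # rent moves 2% for each point of occupancy gap


def demand_only_price(unit_type):
    """Reprice a floor plan using nothing but its own occupancy gap."""
    gap = OCCUPANCY[unit_type] - TARGET_OCCUPANCY
    return BASE_RENTS[unit_type] * (1 + SENSITIVITY * gap)


def with_hierarchy_floor(prices):
    """Human override: never let the 2BR price fall below the 1BR."""
    adjusted = dict(prices)
    adjusted["2BR"] = max(adjusted["2BR"], adjusted["1BR"])
    return adjusted


raw = {u: round(demand_only_price(u)) for u in BASE_RENTS}
print("demand-only prices:", raw)                        # {'1BR': 1248, '2BR': 1110}
print("with human floor:  ", with_hierarchy_floor(raw))  # 2BR bumped back up to 1248
```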

So we have that going for us, at least: the ability to think independently. Score one for the flesh-bags. But I wonder how much longer humans can stake that claim.

Consider the Turing test, a game devised by Alan Turing—subject of last year’s acclaimed movie The Imitation Game—to see if people could distinguish between man and machine when having a “texting” conversation with one or the other.

The test has been run publicly every year since 1991 at the Loebner Prize competition. That first year, the most notable entry was a program that imitated human typos. But the competition has grown more sophisticated every year since, and the “most human” programs are getting harder and harder to tell apart from the real thing.

Turing, writing about the test in 1950, predicted that within about 50 years, computers could “play the imitation game so well that an average interrogator will not have more than a 70% chance of making the right identification” after five minutes of questioning.

Until last year, no program had ever beaten those odds. Then, at a 2014 Turing test event held at the Royal Society in London, a Russian chatterbot dubbed Eugene Goostman (purporting to be a 13-year-old Ukrainian boy) became the first to clear that bar, convincing 33% of the judges that it wasn’t Pinocchio at all but a real human boy.
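
For readers keeping score, Turing’s bar works out like this: a machine “passes” if the interrogator’s correct-identification rate falls to 70% or below, which means the machine has to be mistaken for a human more than 30% of the time. A back-of-the-envelope check in Python, plugging in the figures cited above:

```python
# Back-of-the-envelope check of Turing's threshold, using the figures cited above.
TURING_MAX_CORRECT_ID = 0.70   # "not more than a 70% chance of making the right identification"
goostman_fool_rate = 0.33      # share of judges who took Eugene Goostman for a human

correct_id_rate = 1 - goostman_fool_rate   # 0.67
clears_the_bar = correct_id_rate <= TURING_MAX_CORRECT_ID
print(f"Judges got it right {correct_id_rate:.0%} of the time; clears Turing's bar: {clears_the_bar}")
```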

If a machine can fool us into thinking it’s not a machine, well, imagine the possibilities … then throw out all of your machines before they get any funny ideas (yeah, I’m looking at you, laptop).